891 results for Computer aided software engineering
Abstract:
OBJECTIVE To analyze the precision of fit of implant-supported, screw-retained, computer-aided-designed and computer-aided-manufactured (CAD/CAM) zirconium dioxide (ZrO) frameworks. MATERIALS AND METHODS CAD/CAM ZrO frameworks (NobelProcera) for a screw-retained 10-unit implant-supported reconstruction on six implants (FDI positions 15, 13, 11, 21, 23, 25) were fabricated using a laser scanner (ZrO-L, N = 6) or a mechanical scanner (ZrO-M, N = 5) to digitize the implant platform and the cuspid-supporting framework resin pattern. Laser-scanned CAD/CAM titanium frameworks (TIT-L, N = 6) and cast CoCrW-alloy frameworks (Cast, N = 5), fabricated on the same model and designed similarly to the ZrO frameworks, served as controls. The one-screw test (screw retained on implant 25) was applied to assess the vertical microgap between the implant and framework platforms with a scanning electron microscope. The mean microgap was calculated from approximal and buccal values. Statistical comparisons were performed with non-parametric tests. RESULTS No statistically significant pairwise difference was observed between the relative effects of the vertical microgap for ZrO-L (median 14 μm; 95% CI 10-26 μm), ZrO-M (18 μm; 12-27 μm) and TIT-L (15 μm; 6-18 μm), whereas the values for Cast (236 μm; 181-301 μm) were significantly higher (P < 0.001) than those of the three CAD/CAM groups. A monotonic trend of increasing values from implant 23 to 15 was observed in all groups (ZrO-L, ZrO-M and Cast P < 0.001; TIT-L P = 0.044). CONCLUSIONS Optical and tactile scanners combined with CAD/CAM technology allow for the fabrication of highly accurate long-span screw-retained ZrO implant reconstructions. Titanium frameworks showed the most consistent precision. The fit of the cast alloy frameworks was clinically unacceptable.
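The group comparison above is non-parametric. As a rough illustration of that kind of analysis (not the authors' actual code; the individual measurements below are synthetic stand-ins whose medians merely mimic the reported values), a Python/SciPy sketch:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic microgap samples (in micrometres) per framework group.
groups = {
    "ZrO-L": rng.normal(14, 4, 6),
    "ZrO-M": rng.normal(18, 5, 5),
    "TIT-L": rng.normal(15, 4, 6),
    "Cast":  rng.normal(236, 40, 5),
}

# Global non-parametric comparison across the four groups.
h, p = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H={h:.1f}, p={p:.4f}")

# Pairwise non-parametric tests of each CAD/CAM group against Cast.
for name in ("ZrO-L", "ZrO-M", "TIT-L"):
    u, p_pair = stats.mannwhitneyu(groups[name], groups["Cast"])
    print(f"{name} vs Cast: U={u:.0f}, p={p_pair:.4f}")
```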
Abstract:
BACKGROUND Implant overdentures supported by rigid bars provide stability in the edentulous atrophic mandible. However, fractures of solder joints and matrices, and loosening of screws and matrices, have been observed with soldered gold bars (G-bars). Computer-aided-designed/computer-assisted-manufactured (CAD/CAM) titanium bars (Ti-bars) may reduce technical complications owing to enhanced material quality. PURPOSE To compare the prosthetic-technical maintenance service of mandibular implant overdentures supported by CAD/CAM Ti-bars and soldered G-bars. MATERIALS AND METHODS Edentulous patients were consecutively admitted for implant-prosthodontic treatment with a maxillary complete denture and a mandibular implant overdenture connected to a rigid G-bar or Ti-bar. Maintenance service and problems with the implant-retention device complex and the prosthesis were recorded over a minimum of 3-4 years. Annual peri-implant crestal bone level changes (ΔBIC) were assessed radiographically. RESULTS Data were available for 213 edentulous patients (mean age 68 ± 10 years) who had received a total of 477 tapered implants. The Ti-bar and G-bar groups comprised 101 and 112 patients with 231 and 246 implants, respectively. Ti-bars mostly exhibited distal bar extensions (96%), compared to 34% of G-bars (p < .001). The fracture rates of bar extensions (4.7% vs 14.8%, p < .001) and matrices (1% vs 13%, p < .001) were lower for Ti-bars. Matrix activation was required 2.4 times less often for Ti-bars. ΔBIC remained stable in both groups. CONCLUSIONS Implant overdentures supported by soldered gold bars or milled CAD/CAM Ti-bars are a successful treatment modality but require regular maintenance service. These short-term observations support the hypothesis that CAD/CAM Ti-bars reduce technical complications. Fracture locations indicated that the titanium thickness around the screw-access hole should be increased.
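The fracture-rate result (4.7% vs 14.8%, p < .001) is a two-proportion comparison. A minimal sketch of one way to run such a test, assuming (our assumption, not stated in the abstract) that the rates are per implant, with approximate counts reconstructed from the percentages:

```python
from scipy import stats

# Approximate extension-fracture counts reconstructed from the reported
# rates: 4.7% of 231 Ti-bar implants vs 14.8% of 246 G-bar implants.
ti_fractures, ti_total = round(0.047 * 231), 231
g_fractures, g_total = round(0.148 * 246), 246

# 2x2 contingency table: fractured vs intact, per group.
table = [[ti_fractures, ti_total - ti_fractures],
         [g_fractures, g_total - g_fractures]]
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
```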
Abstract:
Development of homology modeling methods will remain an area of active research. These methods aim to model increasingly accurate three-dimensional structures of as-yet uncrystallized, therapeutically relevant proteins, e.g., Class A G-protein coupled receptors (GPCRs). Incorporating protein flexibility is one way to achieve this goal. Here, I discuss the enhancement and validation of ligand-steered modeling, originally developed by Dr. Claudio Cavasotto, via cross-modeling of newly crystallized GPCR structures. This method uses known ligands and experimental information to optimize the relevant protein binding sites by incorporating protein flexibility. The ligand-steered models reasonably reproduced the binding sites and the co-crystallized native ligand poses of the β2 adrenergic and adenosine A2A receptors using a single template structure. They also outperformed the template structures and crude models in small-scale high-throughput docking experiments and compound selectivity studies. Next, the application of this method to develop high-quality homology models of Cannabinoid Receptor 2, an emerging pain-management target without psychoactive side effects, is discussed. These models were validated by their ability to rationalize structure-activity relationship data for two series of compounds, one of inverse agonists and one of agonists. The method was also applied to improve the virtual screening performance of the β2 adrenergic crystal structure by optimizing the binding site using β2-specific compounds. These results show the feasibility of optimizing only the pharmacologically relevant protein binding sites, and the method's applicability to structure-based drug design projects.
Abstract:
In the last decade, a large number of software repositories have been created for different purposes. In this paper we present a survey of the publicly available repositories, classify the most common ones, and discuss the problems faced by researchers when applying machine learning or statistical techniques to them.
Abstract:
All meta-analyses should include a heterogeneity analysis. Even so, it is not easy to decide whether a set of studies is homogeneous or heterogeneous because of the low statistical power of the statistics used (usually the Q test). Objective: Determine a set of rules enabling SE researchers to find out, based on the characteristics of the experiments to be aggregated, whether or not heterogeneity can feasibly be detected with accuracy. Method: Evaluate the statistical power of heterogeneity detection methods using a Monte Carlo simulation process. Results: The Q test is not powerful when the meta-analysis contains up to a total of about 200 experimental subjects and the effect size difference is less than 1. Conclusions: The Q test cannot be used as a decision-making criterion for meta-analysis in small-sample settings like SE. Random-effects models should be used instead of fixed-effects models. Caution should be exercised when applying Q test-mediated decomposition into subgroups.
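The power evaluation rests on Monte Carlo simulation of Cochran's Q test. A minimal sketch of such a simulation, assuming standardized mean differences with between-study variability (the paper's exact simulation design may differ):

```python
import numpy as np
from scipy import stats

def q_test_power(k_studies=5, n_per_group=20, delta=0.5, tau=0.25,
                 n_sims=5000, alpha=0.05, rng=np.random.default_rng(0)):
    """Estimate the power of Cochran's Q test to detect between-study
    heterogeneity of magnitude `tau` (SD of the true effects)."""
    rejections = 0
    for _ in range(n_sims):
        # True standardized effect of each study varies around `delta`.
        true_d = rng.normal(delta, tau, k_studies)
        # Approximate sampling variance of Cohen's d for two equal groups.
        var_d = (2 / n_per_group) + true_d**2 / (4 * n_per_group)
        d_obs = rng.normal(true_d, np.sqrt(var_d))
        # Cochran's Q: weighted squared deviations from the pooled effect.
        w = 1 / var_d
        d_pooled = np.sum(w * d_obs) / np.sum(w)
        q = np.sum(w * (d_obs - d_pooled) ** 2)
        if q > stats.chi2.ppf(1 - alpha, df=k_studies - 1):
            rejections += 1
    return rejections / n_sims

# With few small studies, the estimated power is typically low,
# consistent with the conclusion drawn in the abstract.
print(q_test_power())
```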
Abstract:
Quality assessment is one of the activities performed as part of systematic literature reviews. It is commonly accepted that a good-quality experiment is bias free. Bias is considered to be related to internal validity (e.g., how adequately the experiment is planned, executed and analysed). Quality assessment is usually conducted using checklists and quality scales. It has not yet been proven, however, that quality is related to experimental bias. Aim: Identify whether there is a relationship between internal validity and bias in software engineering experiments. Method: We built a quality scale to determine the quality of the studies, which we applied to 28 experiments included in two systematic literature reviews. We proposed an objective indicator of experimental bias, which we applied to the same 28 experiments. Finally, we analysed the correlations between the quality scores and the proposed measure of bias. Results: We failed to find a relationship between the global quality score (resulting from the quality scale) and bias; however, we did identify interesting correlations between bias and some particular aspects of internal validity measured by the instrument. Conclusions: There is an empirically provable relationship between internal validity and bias. It is feasible to apply quality assessment in systematic literature reviews, subject to limits on the internal validity aspects considered.
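A correlation analysis of this kind could be run as below. The numbers are synthetic placeholders (the actual quality-scale totals and bias indicator values are not given here), and Spearman's rank correlation is our assumption, a natural choice for ordinal quality-scale scores:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Illustrative stand-ins for the 28 experiments' measurements.
quality_scores = rng.integers(5, 20, size=28)   # total quality-scale score
bias_measure = rng.normal(0.3, 0.1, size=28)    # objective bias indicator

# Rank correlation between overall quality and the bias measure.
rho, p_value = stats.spearmanr(quality_scores, bias_measure)
print(f"rho={rho:.2f}, p={p_value:.3f}")
```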
Abstract:
In the world of information and communications technologies, the demand for professionals with software engineering skills grows at an exponential rate. On this ground, we have conducted a study to help both academia and the software industry form a picture of the relationship between the competences of recent graduates of undergraduate and graduate software engineering programmes and the tasks that these professionals are to perform in their jobs in industry. Thanks to this study, academia will be able to observe which skills demanded by industry the software engineering curricula do or do not cater for, and industry will be able to ascertain which tasks a recent software engineering graduate is well qualified to perform. The study focuses on the software engineering knowledge guidelines provided in SE2004 and GSwE2009, and on the job profiles identified by Career Space.
Abstract:
This research is concerned with the experimental software engineering area, specifically experiment replication. Replication has traditionally been viewed as a complex task in software engineering, possibly owing to the present immaturity of the experimental paradigm as applied to software development. Researchers usually use replication packages to replicate an experiment. However, replication packages do not solve all the information management problems that crop up as successive replications of an experiment accumulate. This research borrows ideas from the software configuration management and software product line paradigms to support the replication process. We believe that configuration management can help to manage and administer information from one replication to another: hypotheses, designs, data analysis, etc. The software product line paradigm can help to organize and manage any changes introduced into the experiment by each replication. We expect the union of the two paradigms in replication to improve the planning, design and execution of further replications and their alignment with existing replications. Additionally, this research will contribute a web support environment for archiving information related to different replications of an experiment. It will provide information management support flexible enough to run replications with different numbers and types of changes, and it will afford massive storage of data from different replications. Experimenters working collaboratively on the same experiment must all have access to the different replications.
Abstract:
There is no empirical evidence whatsoever to support most of the beliefs on which software construction is based. We do not yet know the adequacy, limits, qualities, costs and risks of the technologies used to develop software. Experimentation helps to check and convert beliefs and opinions into facts. This research is concerned with replication. Replication is a key component for gathering empirical evidence on software development that can be used in industry to build better software more efficiently. Replication has not been easy to do in software engineering (SE) because the experimental paradigm applied to software development is still immature. Nowadays, a replication is mostly executed using a traditional replication package. But traditional replication packages do not appear to have been as effective as expected for transferring information among researchers in SE experimentation. The trouble spot appears to be the replication setup, hampered by version management problems with materials, instruments, documents, etc. This has proved to be an obstacle to obtaining enough detail about the experiment to be able to reproduce it as exactly as possible. We address the problem of information exchange among experimenters by developing a schema to characterize replications. We will adapt configuration management and product line ideas to support the experimentation process. This will enable researchers to make systematic decisions based on explicit knowledge rather than assumptions about replications. This research will output a replication support web environment that will not only archive but also manage experimental materials flexibly enough to allow both similar and differentiated replications, with massive experimental data storage. The platform should be accessible to several research groups working together on the same families of experiments.
Abstract:
Autonomous systems are systems capable of operating in a real-world environment without any form of external control for extended periods of time. Autonomy is a desired goal for every system, as it improves performance, safety and profit. Ontologies are a way to conceptualize the knowledge of a specific domain. In this paper, an ontology for the description of autonomous systems, as well as for their development (engineering), is presented and applied to a process. This ontology is intended to be used to generate final applications following a model-driven methodology.
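As a minimal sketch of how an ontology of this kind might be expressed programmatically, here is an example using the owlready2 Python library; the class and property names are illustrative, not the vocabulary actually defined by the paper:

```python
from owlready2 import get_ontology, Thing, ObjectProperty

# Hypothetical IRI; a real ontology would use its published namespace.
onto = get_ontology("http://example.org/autonomous_systems.owl")

with onto:
    # Domain concepts for describing an autonomous system.
    class AutonomousSystem(Thing): pass
    class Sensor(Thing): pass
    class ControlLoop(Thing): pass

    # Relations linking a system to its parts and behaviour.
    class hasSensor(ObjectProperty):
        domain = [AutonomousSystem]
        range = [Sensor]

    class realizes(ObjectProperty):
        domain = [AutonomousSystem]
        range = [ControlLoop]

# Instantiate individuals, as a model-driven pipeline might do
# before generating the final application from the model.
rover = AutonomousSystem("rover")
rover.hasSensor = [Sensor("lidar")]
onto.save(file="autonomous_systems.owl")
```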
Abstract:
During the last century, many researchers in the business, marketing and technology fields have developed the innovation research line, and a large body of knowledge can be found in the literature. Currently, the importance of systematic and open approaches to managing the available innovation sources is well established in many knowledge fields, including the software engineering sector, where organizations need to absorb and exploit as many innovative ideas as possible to succeed in the current competitive environment. This Master's Thesis presents a study of innovation sources in the software engineering field. The main research goals of this work are the identification and relevance assessment of the available innovation sources and an understanding of trends in innovation source usage. Firstly, a general review of the literature was conducted in order to define the research area and to identify research gaps. Secondly, a Systematic Literature Review (SLR) was adopted as the research method to report reliable conclusions by systematically collecting quality evidence about innovation sources in the software engineering field. This contribution provides resources, built on the empirical studies included in the SLR, to support the systematic identification and adequate exploitation of the innovation sources most suitable for the software engineering field. Several artefacts, such as lists, taxonomies and relevance assessments of the innovation sources most suitable for software engineering, have been built, and their usage trends over the last decades, as well as their particularities in some countries and knowledge fields, especially software engineering, have been researched. This work can help researchers, managers and practitioners of innovative software organizations to systematize critical activities in innovation processes, such as the identification and exploitation of the most suitable opportunities. Innovation researchers can use the results of this work to conduct studies in the innovation sources research area, whereas organization managers and software practitioners can use the provided outcomes in a systematic way to improve their innovation capability, consequently increasing value creation in the processes that they run to provide products and services useful to their environment. In summary, this Master's Thesis researches innovation sources in the software engineering field, providing useful resources to support effective innovation source management. Several aspects should be studied in more depth to increase the accuracy of the presented results and to obtain more resources built on empirical knowledge. This can be supported by the INnovation SOurces MAnagement (InSoMa) framework, which is introduced in this work to encourage open and systematic approaches to identifying and exploiting innovation sources in the software engineering field.
Abstract:
Experimental software engineering includes several processes, the most representative being running experiments, running replications and synthesizing the results of multiple replications. Of these processes, only the first is relatively well established in software engineering. Problems of information management and communication among researchers are among the obstacles to progress in the replication and synthesis processes. Software engineering experimentation has expanded considerably over the last few years, bringing with it proposals for experimental process support. However, few of these proposals provide integral support that includes the replication and synthesis processes; most focus on experiment execution. This paper proposes an infrastructure providing integral support for the experimental research process, specializing in the replication and synthesis of a family of experiments. The research has been divided into stages or phases whose transition milestones are marked by the attainment of their goals. Each goal exactly matches an artifact or product. Within each stage, we will adopt cycles of successive approximation (generate-and-test cycles), where each approximation incorporates a different viewpoint or input. Each cycle will end with the approval of the product.