MINING AND VERIFICATION OF TEMPORAL EVENTS WITH APPLICATIONS IN COMPUTER MICRO-ARCHITECTURE RESEARCH
Abstract:
Computer simulation programs are essential tools for scientists and engineers to understand a particular system of interest. As expected, the complexity of the software increases with the depth of the model used. In addition to the exigent demands of software engineering, verification of simulation programs is especially challenging because the models represented are complex and ridden with unknowns that will be discovered by developers in an iterative process. To manage such complexity, advanced verification techniques for continually matching the intended model to the implemented model are necessary. Therefore, the main goal of this research work is to design a useful verification and validation framework that is able to identify model representation errors and is applicable to generic simulators. The framework that was developed and implemented consists of two parts. The first part is the First-Order Logic Constraint Specification Language (FOLCSL), which enables users to specify the invariants of a model under consideration. From the first-order logic specification, the FOLCSL translator automatically synthesizes a verification program that reads the event trace generated by a simulator and signals whether all invariants are respected. The second part consists of mining the temporal flow of events using a newly developed representation called the State Flow Temporal Analysis Graph (SFTAG). While the first part seeks an assurance of implementation correctness by checking that the model invariants hold, the second part derives an extended model of the implementation and hence enables a deeper understanding of what was implemented. The main application studied in this work is the validation of the timing behavior of micro-architecture simulators. The study includes SFTAGs generated for a wide set of benchmark programs and their analysis using several artificial intelligence algorithms. This work improves computer architecture research and verification processes, as shown by the case studies and experiments that have been conducted.
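The abstract does not give the FOLCSL syntax or the code the translator emits, but the following minimal Python sketch illustrates the kind of check a synthesized verifier performs: it reads an event trace produced by a simulator and reports whether a hypothetical invariant ("every request is eventually completed") holds. The trace format, event names, and function names are illustrative assumptions, not part of the original framework.

```python
# Minimal sketch of the kind of checker a FOLCSL-style translator might
# synthesize. The trace format and the invariant are hypothetical examples.
from collections import defaultdict

def check_request_completion(trace):
    """Check the invariant: every ('request', id) event is eventually
    followed by a matching ('complete', id) event in the trace."""
    pending = defaultdict(int)
    for timestamp, kind, ident in trace:
        if kind == "request":
            pending[ident] += 1
        elif kind == "complete":
            if pending[ident] == 0:
                return False, f"completion without request: {ident} at t={timestamp}"
            pending[ident] -= 1
    unresolved = [i for i, n in pending.items() if n > 0]
    if unresolved:
        return False, f"requests never completed: {unresolved}"
    return True, "invariant holds"

# Example event trace: (cycle, event kind, transaction id)
trace = [
    (0, "request", "ld1"),
    (3, "request", "ld2"),
    (5, "complete", "ld1"),
    (9, "complete", "ld2"),
]
ok, msg = check_request_completion(trace)
print(ok, msg)
```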
Abstract:
In the medical field, images obtained from high-definition cameras and other medical imaging systems are an integral part of medical diagnosis. The analysis of these images is usually performed by physicians, who sometimes need to spend long hours reviewing the images before they are able to come up with a diagnosis and then decide on the course of action. In this dissertation we present a framework for computer-aided analysis of medical imagery via the use of an expert system. While this problem has been discussed before, we will consider a system based on mobile devices. Since the release of the iPhone in 2007, the popularity of mobile devices has increased rapidly and our lives have become more reliant on them. This popularity and the ease of development of mobile applications have now made it possible to perform on these devices many of the image analyses that previously required a personal computer. All of this has opened the door to a whole new set of possibilities and freed physicians from their reliance on their desktop machines. The approach proposed in this dissertation aims to capitalize on these newfound opportunities by providing a framework for analysis of medical images that physicians can utilize from their mobile devices, thus removing their reliance on desktop computers. We also provide an expert system to aid in the analysis and advise on the selection of medical procedures. Finally, we also allow for other mobile applications to be developed by providing a generic mobile application development framework that gives other applications access to the mobile domain. In this dissertation we outline our work leading towards the development of the proposed methodology and the remaining work needed to find a solution to the problem. In order to make this difficult problem tractable, we divide the problem into three parts: the development of a user interface modeling language and tooling, the creation of a game development modeling language and tooling, and the development of a generic mobile application framework. In order to make this problem more manageable, we will narrow the initial scope down to the hair transplant and glaucoma domains.
Abstract:
The study was developed as a teacher-research project during initial teacher education, a Master's Degree in Early Childhood and Primary Education in Portugal. It analysed the interactions between children aged 3 to 6 during the use of the computer as a free-choice activity, comparing situations between peers of the same age and situations between peers of different ages. The focus of the analysis was collaborative interactions. This was a qualitative study. Children could choose the computer, amongst other interest areas, and work for around an hour in pairs. On the computer, children used mainly educational games. During four weeks, the interactions between the pairs were audio recorded. Field notes and informal interviews with the children were also used to collect data. Eleven children were involved in the study, with ages ranging from 3 to 6 years old. Baseline data on children's basic computer proficiency was collected using the Individualized Computer Proficiency Checklist (ICPC) by Hyun. The recorded interactions were analysed using the types of talk proposed by Scrimshaw and Perkins and by Wegerif and Scrimshaw: cumulative talk, exploratory talk, disputational talk, and tutorial talk. This framework had already been used in a study in an early childhood education context in Portugal by Amante. The results reveal differences in computer use and characterize the observed interactions. Seven different pairs of children's interactions were analysed. More than a third of the interactions were cumulative talk, followed by exploratory talk, tutorial talk and disputational talk. Comparing same-age and mixed-age pairs, we observed that cumulative talk is the most frequent interaction in both, but in same-age pairs it is followed by exploratory talk, whereas in mixed-age pairs it is tutorial talk that has the second largest percentage. The pairs formed by the children were very asymmetrical in terms of age and computer proficiency. This led to more tutorial interactions, where one child showed the other or directed him/her on how to play. The results show that collaboration is present during the use of a computer area in early childhood education. The free choice of the children means that adults can only suggest pairings suited to specific interactions between the children. Another way to support children in more exploratory talk interactions could be to discuss how the older children can help the younger ones beyond directing or correcting their work.
Abstract:
User Quality of Experience (QoE) is a subjective entity and difficult to measure. One important aspect of it, User Experience (UX), corresponds to the sensory and emotional state of a user. For a user interacting through a User Interface (UI), precise information on how they are using the UI can contribute to understanding their UX, and thereby understanding their QoE. As well as a user's use of the UI, such as clicking, scrolling, touching, or selecting, other real-time digital information about the user, such as from smartphone sensors (e.g. accelerometer, light level) and physiological sensors (e.g. heart rate, ECG, EEG), could contribute to understanding UX. Baran is a framework that is designed to capture, record, manage and analyse the User Digital Imprint (UDI), which is the data structure containing all user context information. Baran simplifies the process of collecting experimental information in Human-Computer Interaction (HCI) studies by recording comprehensive real-time data for any UI experiment and making the data available as a standard UDI data structure. This paper presents an overview of the Baran framework and provides an example of its use to record user interaction and perform some basic analysis of the interaction.
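The abstract describes the UDI only as "the data structure containing all user context information" and does not publish its schema; the sketch below is a hypothetical illustration of such a record, combining UI events with device and physiological sensor samples. All class and field names are assumptions for illustration, not the actual Baran API.

```python
# Hypothetical sketch of a UDI-style record; field names are illustrative,
# not the actual Baran schema (which the abstract does not specify).
from dataclasses import dataclass, field
from typing import Any, Dict, List
import time

@dataclass
class UIEvent:
    timestamp: float                 # seconds since epoch
    kind: str                        # e.g. "click", "scroll", "touch", "select"
    target: str                      # UI element identifier
    details: Dict[str, Any] = field(default_factory=dict)

@dataclass
class SensorSample:
    timestamp: float
    source: str                      # e.g. "accelerometer", "light", "heart_rate"
    value: Any

@dataclass
class UserDigitalImprint:
    user_id: str
    ui_events: List[UIEvent] = field(default_factory=list)
    sensor_samples: List[SensorSample] = field(default_factory=list)

    def record_click(self, target: str) -> None:
        self.ui_events.append(UIEvent(time.time(), "click", target))

udi = UserDigitalImprint(user_id="u001")
udi.record_click(target="submit_button")
udi.sensor_samples.append(SensorSample(time.time(), "light", 312))
print(len(udi.ui_events), len(udi.sensor_samples))
```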
Abstract:
Predicting user behaviour enables user assistant services to provide personalized services to users. This requires a comprehensive user model that can be created by monitoring user interactions and activities. BaranC is a framework that performs user interface (UI) monitoring (and collects all associated context data), builds a user model, and supports services that make use of the user model. A prediction service, Next-App, is built to demonstrate the use of the framework and to evaluate the usefulness of such a prediction service. Next-App analyses a user's data, learns patterns, builds a model for the user, and finally predicts, based on the user model and current context, what application(s) the user is likely to want to use. The prediction is proactive and dynamic: it reflects the current context and also responds to changes in the user model, as might occur over time as a user's habits change. Initial evaluation of Next-App indicates a high level of satisfaction with the service.
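The abstract does not describe Next-App's learning algorithm; as a stand-in, the sketch below shows a minimal frequency-based predictor that ranks likely apps for a coarse context (an hour-of-day bucket). The class name, the context bucketing, and the sample data are illustrative assumptions only.

```python
# Minimal frequency-based sketch of a next-app predictor conditioned on a
# coarse context bucket; it only illustrates the idea of predicting from a
# per-context usage model, not Next-App's actual method.
from collections import Counter, defaultdict

class NextAppModel:
    def __init__(self):
        # context bucket -> Counter of app launches observed in that bucket
        self.counts = defaultdict(Counter)

    @staticmethod
    def bucket(hour):
        if 6 <= hour < 12:
            return "morning"
        if 12 <= hour < 18:
            return "afternoon"
        if 18 <= hour < 23:
            return "evening"
        return "night"

    def observe(self, hour, app):
        self.counts[self.bucket(hour)][app] += 1

    def predict(self, hour, k=3):
        ranked = self.counts[self.bucket(hour)].most_common(k)
        return [app for app, _ in ranked]

model = NextAppModel()
for hour, app in [(8, "mail"), (8, "news"), (9, "mail"), (13, "maps"), (20, "video")]:
    model.observe(hour, app)
print(model.predict(hour=8))   # e.g. ['mail', 'news']
```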
Abstract:
A comprehensive user model, built by monitoring a user's current use of applications, can be an excellent starting point for building adaptive user-centred applications. The BaranC framework monitors all user interaction with a digital device (e.g. smartphone), and also collects all available context data (such as from sensors in the digital device itself, in a smart watch, or in smart appliances) in order to build a full model of user application behaviour. The model built from the collected data, called the UDI (User Digital Imprint), is further augmented by analysis services, for example, a service to produce activity profiles from smartphone sensor data. The enhanced UDI model can then be the basis for building an appropriate adaptive application that is user-centred, as it is based on an individual user model. As BaranC supports continuous user monitoring, an application can be dynamically adaptive in real time to the current context (e.g. time, location or activity). Furthermore, since BaranC is continuously augmenting the user model with more monitored data, the user model changes over time, and the adaptive application can adapt gradually to changing user behaviour patterns. BaranC has been implemented as a service-oriented framework where the collection of data for the UDI and all sharing of the UDI data are kept strictly under the user's control. In addition, being service-oriented allows (with the user's permission) its monitoring and analysis services to be easily used by third parties in order to provide third-party adaptive assistant services. An example third-party service demonstrator, built on top of BaranC, proactively assists a user by dynamically predicting, based on the current context, which apps and contacts the user is likely to need. BaranC introduces an innovative user-controlled unified service model for monitoring and using personal digital activity data in order to provide adaptive user-centred applications. This aims to improve on the current situation, where the diversity of adaptive applications leads to a proliferation of applications monitoring and using personal data, resulting in a lack of clarity, a dispersal of data, and a diminution of user control.
Abstract:
The authors present a proposal to develop intelligent assisted living environments for home-based healthcare. These environments unite a semantic representation of the chronic patient's clinical history with the ability to monitor living conditions and events, relying on a fully managed Semantic Web of Things (SWoT). Several levels of acquired knowledge, together with the case-based reasoning made possible by the knowledge representation of the health-disease history and the acquisition of scientific evidence, will deliver, through various voice-based natural interfaces, adequate support systems for disease self-management and, prominently, for activating the less differentiated caregiver for any specific need. With these capabilities at hand, home-based healthcare provision becomes a viable possibility, reducing the need for institutionalization. The resulting integrated healthcare framework will provide significant savings while improving overall health and satisfaction indicators.
Design and Development of a Research Framework for Prototyping Control Tower Augmented Reality Tools
Abstract:
The purpose of the air traffic management system is to ensure the safe and efficient flow of air traffic. Therefore, while augmenting efficiency, throughput and capacity in airport operations, attention has rightly been placed on doing so in a safe manner. In the control tower, many advances in operational safety have come in the form of visualization tools for tower controllers. However, there is a paradox in developing such systems to increase controllers' situational awareness: additional computer displays pull the controller's vision away from the outside view and increase the time spent looking down at the monitors. This reduces situational awareness by forcing controllers to mentally and physically switch between the head-down equipment and the outside view. This research is based on the idea that augmented reality may be able to address this issue. The augmented reality concept has become increasingly popular over the past decade and is being proficiently used in many fields, such as entertainment, cultural heritage, aviation, and military and defense. This know-how could be transferred to air traffic control with relatively low effort and substantial benefits for controllers' situation awareness. Research on this topic is consistent with the SESAR objectives of increasing air traffic controllers' situation awareness and enabling up to 10% additional flights at congested airports while still increasing safety and efficiency. During the Ph.D., a research framework for prototyping augmented reality tools was set up. This framework consists of methodological tools for designing the augmented reality overlays, as well as hardware and software equipment to test them. Several overlays have been designed and implemented in a simulated tower environment, which is a virtual reconstruction of the Bologna airport control tower. The positive impact of such tools was preliminarily assessed by means of the proposed methodology.
Abstract:
The issues influencing student engagement with high-stakes computer-based exams were investigated, drawing on feedback from two cohorts of international MA Education students encountering this assessment method for the first time. Qualitative data from surveys and focus groups on the students' examination experience were analysed, leading to the identification of engagement issues in the delivery of high-stakes computer-based assessments. The exam combined short-answer open-response questions with multiple-choice-style items to assess knowledge and understanding of research methods. The findings suggest that engagement with computer-based testing depends, to a lesser extent, on students' general levels of digital literacy and, to a greater extent, on their information technology (IT) proficiency for assessment and their ability to adapt their test-taking strategies, including organisational and cognitive strategies, to the online assessment environment. The socialisation and preparation of students for computer-based testing therefore emerge as key responsibilities for instructors to address, with students requesting increased opportunities for practice and training to develop the IT skills and test-taking strategies necessary to succeed in computer-based examinations. These findings and their implications in terms of instructional responsibilities form the basis of a proposal for a framework for Learner Engagement with e-Assessment Practices.
Abstract:
The design optimization of industrial products has always been an essential activity to improve product quality while reducing time-to-market and production costs. Although cost management is very complex and comprises all phases of the product life cycle, the control of geometrical and dimensional variations, known as Dimensional Management (DM), allows compliance with product and process requirements. Hence, tolerance-cost optimization becomes the main practice for an effective application of Design for Tolerancing (DfT) and Design to Cost (DtC) approaches, by enabling a connection between product tolerances and the associated manufacturing costs. However, despite the growing interest in this topic, profitable application of these techniques in industry is hampered by their complexity: the definition of a systematic framework is the key element in improving design optimization, enhancing the concurrent use of Computer-Aided tools and Model-Based Definition (MBD) practices. The present doctoral research aims to define and develop an integrated methodology for product/process design optimization, to better exploit the new capabilities of advanced simulations and tools. By implementing predictive models and multi-disciplinary optimization, a Computer-Aided Integrated framework for tolerance-cost optimization has been proposed to allow the integration of DfT and DtC approaches and their direct application to the design of automotive components. Several case studies have been considered, with the final application of the integrated framework to a high-performance V12 engine assembly, achieving both functional targets and cost reduction. From a scientific point of view, the proposed methodology provides an improvement to the tolerance-cost optimization of industrial components. The integration of theoretical approaches and Computer-Aided tools makes it possible to analyse the influence of tolerances on both product performance and manufacturing costs. The case studies proved the suitability of the methodology for application in the industrial field and identified further areas for improvement and refinement.
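The thesis's cost models and constraints are not given in the abstract; the sketch below only illustrates the general tolerance-cost trade-off the text refers to, using a textbook reciprocal cost-tolerance function and a root-sum-square stack-up constraint with made-up coefficients.

```python
# Textbook-style tolerance-cost optimization sketch (not the thesis's actual
# models): minimize total manufacturing cost, modelled with a reciprocal
# cost-tolerance function c_i(t) = a_i + b_i / t, subject to a root-sum-square
# stack-up limit on the assembly tolerance. All coefficients are illustrative.
import numpy as np
from scipy.optimize import minimize

a = np.array([2.0, 1.5, 3.0])        # fixed cost per feature
b = np.array([0.8, 0.5, 1.2])        # cost sensitivity to tolerance tightening
T_assembly = 0.30                     # allowed assembly tolerance (mm, RSS)

def total_cost(t):
    return float(np.sum(a + b / t))

constraints = [{
    "type": "ineq",                   # require sqrt(sum t_i^2) <= T_assembly
    "fun": lambda t: T_assembly - np.sqrt(np.sum(t ** 2)),
}]
bounds = [(0.01, 0.25)] * 3           # per-feature process capability limits

res = minimize(total_cost, x0=np.full(3, 0.10), bounds=bounds,
               constraints=constraints, method="SLSQP")
print(res.x, total_cost(res.x))
```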
Abstract:
Technological advancement has undergone exponential growth in recent years, and this has brought significant improvements in the computational capabilities of computers, which can now perform an enormous number of calculations per second. Taking advantage of these improvements, it has become possible to devise algorithms that are very demanding in terms of computational resources and to develop architectures capable of solving highly complex problems; currently, the most powerful of these are neural networks. In this thesis, I combine these techniques with classical computer vision algorithms to improve the speed and accuracy of maintenance in photovoltaic facilities.
Abstract:
High-throughput screening of physical, genetic and chemical-genetic interactions brings important perspectives to the Systems Biology field, as the analysis of these interactions provides new insights into protein/gene function, cellular metabolic variations and the validation of therapeutic targets and drug design. However, such analysis depends on a pipeline connecting different tools that can automatically integrate data from diverse sources and produce a more comprehensive dataset that can be properly interpreted. We describe here the Integrated Interactome System (IIS), an integrative platform with a web-based interface for the annotation, analysis and visualization of the interaction profiles of proteins/genes, metabolites and drugs of interest. IIS works in four connected modules: (i) the Submission module, which receives raw data derived from Sanger sequencing (e.g. two-hybrid system); (ii) the Search module, which enables the user to search for the processed reads to be assembled into contigs/singlets, or for lists of proteins/genes, metabolites and drugs of interest, and add them to the project; (iii) the Annotation module, which assigns annotations from several databases to the contigs/singlets or lists of proteins/genes, generating tables with automatic annotation that can be manually curated; and (iv) the Interactome module, which maps the contigs/singlets or the uploaded lists to entries in our integrated database, building networks that gather novel identified interactions, protein and metabolite expression/concentration levels, subcellular localization, computed topological metrics, and GO biological process and KEGG pathway enrichment. This module generates an XGMML file that can be imported into Cytoscape or visualized directly on the web. We have developed IIS by integrating diverse databases, following the need for appropriate tools for a systematic analysis of physical, genetic and chemical-genetic interactions. IIS was validated with yeast two-hybrid, proteomics and metabolomics datasets, but it is also extendable to other datasets. IIS is freely available online at: http://www.lge.ibi.unicamp.br/lnbio/IIS/.
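IIS itself is a web platform, so the following sketch only illustrates the final step attributed to the Interactome module: merging interaction pairs and per-protein annotations into a network and writing it out as XGMML for Cytoscape. The data, attribute names, and the deliberately simplified XGMML subset are assumptions, not IIS's actual output format.

```python
# Minimal sketch of an Interactome-module-style export: merge interaction
# pairs and per-protein annotations into a network and write a simplified
# XGMML document. Element/attribute subset and data are illustrative only.
import xml.etree.ElementTree as ET

interactions = [("YFG1", "YFG2", "two-hybrid"), ("YFG2", "YFG3", "co-purification")]
annotations = {"YFG1": "kinase", "YFG2": "scaffold", "YFG3": "phosphatase"}

graph = ET.Element("graph", {"label": "IIS-style network",
                             "xmlns": "http://www.cs.rpi.edu/XGMML",
                             "directed": "0"})
node_ids = {}
for name in sorted({p for pair in interactions for p in pair[:2]}):
    node_ids[name] = str(len(node_ids) + 1)
    node = ET.SubElement(graph, "node", {"id": node_ids[name], "label": name})
    ET.SubElement(node, "att", {"name": "function",
                                "value": annotations.get(name, "unknown"),
                                "type": "string"})
for source, target, method in interactions:
    edge = ET.SubElement(graph, "edge", {"source": node_ids[source],
                                         "target": node_ids[target],
                                         "label": f"{source}-{target}"})
    ET.SubElement(edge, "att", {"name": "interaction", "value": method, "type": "string"})

ET.ElementTree(graph).write("network.xgmml", encoding="utf-8", xml_declaration=True)
```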
Abstract:
The aim of this study was to evaluate the effectiveness of Reciproc for the removal of cultivable bacteria and endotoxins from root canals in comparison with multifile rotary systems. The root canals of forty human single-rooted mandibular premolars were contaminated with an Escherichia coli suspension for 21 days and randomly assigned to four groups according to the instrumentation system: GI - Reciproc (VDW); GII - Mtwo (VDW); GIII - ProTaper Universal (Dentsply Maillefer); and GIV - FKG Race™ (FKG Dentaire) (n = 10 per group). Bacterial and endotoxin samples were taken with a sterile/apyrogenic paper point before (s1) and after instrumentation (s2). Culture techniques determined the colony-forming units (CFU), and the Limulus Amebocyte Lysate assay was used for endotoxin quantification. Results were submitted to the paired t-test and ANOVA. At s1, bacteria and endotoxins were recovered in 100% of the root canals investigated (40/40). After instrumentation, all systems were associated with a highly significant reduction of the bacterial load and endotoxin levels, respectively: GI - Reciproc (99.34% and 91.69%); GII - Mtwo (99.86% and 83.11%); GIII - ProTaper (99.93% and 78.56%); and GIV - FKG Race™ (99.99% and 82.52%) (P < 0.001). No statistical differences were found amongst the instrumentation systems regarding bacteria and endotoxin removal (P > 0.01). The reciprocating single file, Reciproc, was as effective as the multifile rotary systems for the removal of bacteria and endotoxins from root canals.
Abstract:
Resource specialisation, although a fundamental component of ecological theory, is employed in disparate ways. Most definitions derive from simple counts of resource species. We build on recent advances in ecophylogenetics and null model analysis to propose a concept of specialisation that comprises affinities among resources as well as their co-occurrence with consumers. In the distance-based specialisation index (DSI), specialisation is measured as relatedness (phylogenetic or otherwise) of resources, scaled by the null expectation of random use of locally available resources. Thus, specialists use significantly clustered sets of resources, whereas generalists use over-dispersed resources. Intermediate species are classed as indiscriminate consumers. The effectiveness of this approach was assessed with differentially restricted null models, applied to a data set of 168 herbivorous insect species and their hosts. Incorporation of plant relatedness and relative abundance greatly improved specialisation measures compared to taxon counts or simpler null models, which overestimate the fraction of specialists, a problem compounded by insufficient sampling effort. This framework disambiguates the concept of specialisation with an explicit measure applicable to any mode of affinity among resource classes, and is also linked to ecological and evolutionary processes. This will enable a more rigorous deployment of ecological specialisation in empirical and theoretical studies.
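The published DSI includes standardization details not reproduced here; the sketch below captures only the core idea stated in the abstract: compare the mean pairwise (e.g. phylogenetic) distance among the resources a consumer uses against a null distribution built from abundance-weighted random draws of locally available resources, so that negative standardized scores indicate clustered resource use (specialists) and positive scores indicate over-dispersed use (generalists). The distance matrix and abundances are invented for illustration.

```python
# Sketch of the core DSI idea: observed mean pairwise distance among used
# hosts vs. a null distribution of abundance-weighted random host draws.
# All host names, distances and abundances below are illustrative.
import numpy as np

rng = np.random.default_rng(0)

hosts = ["A", "B", "C", "D", "E"]
abundance = np.array([0.40, 0.25, 0.15, 0.12, 0.08])   # local availability
# Symmetric pairwise phylogenetic distance matrix among hosts (illustrative)
D = np.array([[0, 1, 4, 4, 5],
              [1, 0, 4, 4, 5],
              [4, 4, 0, 1, 5],
              [4, 4, 1, 0, 5],
              [5, 5, 5, 5, 0]], dtype=float)

def mean_pairwise_distance(indices):
    idx = np.array(indices)
    sub = D[np.ix_(idx, idx)]
    return sub[np.triu_indices(len(idx), k=1)].mean()

def dsi_z(used_hosts, n_null=9999):
    used = [hosts.index(h) for h in used_hosts]
    obs = mean_pairwise_distance(used)
    null = np.array([
        mean_pairwise_distance(rng.choice(len(hosts), size=len(used),
                                          replace=False, p=abundance))
        for _ in range(n_null)])
    # Negative z: hosts more related than expected (specialist);
    # positive z: hosts over-dispersed (generalist).
    return (obs - null.mean()) / null.std()

print(dsi_z(["A", "B"]))   # closely related pair -> strongly negative
print(dsi_z(["A", "E"]))   # distantly related pair -> positive
```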
Abstract:
This study investigated the influence of cervical preflaring with different rotary instruments on determination of the initial apical file (IAF) in mesiobuccal roots of mandibular molars. Fifty human mandibular molars whose mesial roots presented two clearly separated apical foramens (mesiobuccal and mesiolingual) were used. After standard access opening and removal of pulp tissue, the working length (WL) was determined at 1 mm short of the root apex. Five groups (n=10) were formed at random, according to the type of instrument used for cervical preflaring. In group 1, the size of the IAF was determined without preflaring of the cervical and middle root canal thirds. In groups 2 to 5, preflaring was performed with Gates-Glidden drills, ProTaper instruments, EndoFlare instruments and LA Axxess burs, respectively. Canals were sized manually with K-files, starting with size 08 K-files, inserted passively up to the WL. File sizes were increased until a binding sensation was felt at the WL, and the size of the file was recorded. The instrument corresponding to the IAF was fixed into the canal at the WL with methylcyanoacrylate. The teeth were then sectioned transversally 1 mm short of the apex, with the IAF in position. Cross-sections of the WL region were examined under scanning electron microscopy (FEG), and the discrepancies between the canal diameter and the diameter of the IAF were calculated using the "rule" tool of the microscope's proprietary software. The measurements (µm) were analyzed statistically by the Kruskal-Wallis and Dunn's tests at a 5% significance level. There were statistically significant differences among the groups (p<0.05). The non-flared group had the greatest discrepancy (125.30 ± 51.54) and differed significantly from all flared groups (p<0.05). Cervical preflaring with LA Axxess burs produced the smallest discrepancies (55.10 ± 48.31), followed by EndoFlare instruments (68.20 ± 42.44), Gates-Glidden drills (68.90 ± 42.46) and ProTaper files (77.40 ± 73.19). However, no significant differences (p>0.05) were found among the rotary instruments. In conclusion, cervical preflaring improved IAF fitting to the canals at the WL in mesiobuccal roots of mandibular molars. The rotary instruments evaluated in this study did not differ from each other regarding the discrepancies produced between the IAF size and the canal diameter at the WL.
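The study reports only group means ± SD, not the raw per-tooth discrepancies, so the values below are placeholders; the sketch merely illustrates the Kruskal-Wallis step of the analysis described above (a Dunn's post hoc comparison would follow, e.g. via the scikit-posthocs package).

```python
# Illustrative sketch of the Kruskal-Wallis comparison described above.
# The per-tooth discrepancy values (µm) are placeholders, not the study's data.
from scipy.stats import kruskal

groups = {
    "no preflaring": [180, 95, 160, 70, 140, 120, 110, 150, 85, 143],
    "Gates-Glidden": [60, 110, 40, 75, 30, 95, 55, 80, 65, 79],
    "ProTaper":      [90, 40, 150, 60, 30, 85, 70, 110, 55, 84],
    "EndoFlare":     [70, 50, 120, 45, 35, 90, 60, 80, 55, 77],
    "LA Axxess":     [50, 30, 100, 40, 25, 70, 45, 85, 60, 46],
}

h_stat, p_value = kruskal(*groups.values())
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("At least one group differs; proceed to Dunn's post hoc test.")
```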