987 results for MODEL REUSE
Abstract:
In today's information-driven society, many studies explore the usefulness and ease of use of technology, and research into personalizing next-generation user interfaces is ever increasing. A better understanding of the factors that influence users' perception of web search engine performance would contribute to achieving this. This study measures and examines how users' perceived level of prior knowledge and experience influences their perceived satisfaction with web search engines, and how that satisfaction affects their intention to reuse the system. Fifty participants from an Australian university took part in the study, performing three search tasks and completing survey questionnaires. A research model was constructed to test the proposed hypotheses. Correlation and regression analyses indicated significant correlations between (1) users' prior level of experience and their perceived satisfaction with the web search engines, and (2) their perceived satisfaction with the systems and their intention to reuse them. A theoretical model is proposed to illustrate the causal relationships. The implications and limitations of the study are also discussed.
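A minimal sketch of the kind of correlation/regression analysis the abstract describes, using synthetic stand-ins for the questionnaire scores; the variable names mirror the constructs in the abstract, not the study's actual instrument:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 50                                   # participants, as in the study
experience = rng.normal(4, 1, n)         # perceived prior experience (e.g. 1-7 scale)
satisfaction = 0.6 * experience + rng.normal(0, 0.8, n)
reuse_intention = 0.7 * satisfaction + rng.normal(0, 0.6, n)

# (1) prior experience vs. perceived satisfaction
r1, p1 = stats.pearsonr(experience, satisfaction)
# (2) perceived satisfaction vs. intention to reuse
r2, p2 = stats.pearsonr(satisfaction, reuse_intention)
reg = stats.linregress(satisfaction, reuse_intention)

print(f"experience~satisfaction: r={r1:.2f} (p={p1:.4f})")
print(f"satisfaction~reuse:      r={r2:.2f} (p={p2:.4f}), slope={reg.slope:.2f}")
```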
Abstract:
A two-dimensional variable-order fractional nonlinear reaction-diffusion model is considered. A semi-implicit alternating direction method with second-order spatial accuracy is proposed for this model, and the stability and convergence of the method are established. Finally, some numerical examples are given to support the theoretical analysis. These numerical techniques can be used to simulate a two-dimensional variable-order fractional FitzHugh-Nagumo model in a rectangular domain. This type of model can describe how electrical currents flow through the heart, controlling its contractions, and can be used to ascertain the effects of certain drugs designed to treat arrhythmia.
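For concreteness, one common form of a variable-order fractional FitzHugh-Nagumo reaction-diffusion system with Riesz space-fractional derivatives is sketched below; the notation (diffusivities K_x, K_y, variable orders α, β, excitation parameters a, ε, γ) is assumed for illustration, not taken verbatim from the paper:

\[
\frac{\partial u}{\partial t} = K_x \frac{\partial^{\alpha(x,y,t)} u}{\partial |x|^{\alpha(x,y,t)}} + K_y \frac{\partial^{\beta(x,y,t)} u}{\partial |y|^{\beta(x,y,t)}} + u(1-u)(u-a) - v,
\qquad
\frac{\partial v}{\partial t} = \varepsilon(\gamma u - v),
\]

with 1 < α(x,y,t), β(x,y,t) ≤ 2. A semi-implicit alternating direction scheme of the kind described would treat the fractional diffusion implicitly, one spatial direction at a time, and the nonlinear reaction term explicitly.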
Abstract:
In vegetated environments, reliable obstacle detection remains a challenge for state-of-the-art methods, which are usually based on geometrical representations of the environment built from LIDAR and/or visual data. In practice, field robots could often safely traverse through vegetation, avoiding costly detours, yet vegetation is frequently misinterpreted as an obstacle. Classifying vegetation alone is insufficient, since an obstacle might be hidden behind or within it. Ultra-wideband (UWB) radar can penetrate vegetation and thus help distinguish actual obstacles from obstacle-free vegetation; however, these sensors provide noisy, low-accuracy data. Therefore, in this work we address the problem of reliable traversability estimation in vegetation by augmenting LIDAR-based traversability mapping with UWB radar data. A sensor model is learned from experimental data using a support vector machine to convert the radar data into occupancy probabilities, which are then fused with the LIDAR-based traversability data. The resulting augmented traversability maps retain the fine resolution of LIDAR-based maps while preventing safely traversable foliage from being interpreted as an obstacle. We validate the approach experimentally using sensors mounted on two different mobile robots navigating in two different environments.
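A minimal sketch of the pipeline the abstract describes, with synthetic stand-ins for the experimental data: an SVM learns a radar sensor model (features to occupancy probability), which is then fused per map cell with a LIDAR-based estimate. The feature choices and the independent-evidence log-odds fusion rule are assumptions, not the paper's exact method:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Labelled UWB radar returns: columns = (echo amplitude, range spread); 1 = obstacle
X_train = rng.normal(size=(200, 2))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)
svm = SVC(kernel="rbf", probability=True).fit(X_train, y_train)

radar_features = rng.normal(size=(1000, 2))        # one row per map cell
p_radar = svm.predict_proba(radar_features)[:, 1]  # radar occupancy probability
p_lidar = rng.uniform(0.05, 0.95, size=1000)       # stand-in LIDAR occupancy map

def fuse_log_odds(p_a, p_b, eps=1e-6):
    """Fuse two occupancy probabilities, treating them as independent evidence."""
    def logit(p):
        p = np.clip(p, eps, 1 - eps)
        return np.log(p / (1 - p))
    return 1.0 / (1.0 + np.exp(-(logit(p_a) + logit(p_b))))

p_fused = fuse_log_odds(p_lidar, p_radar)  # augmented traversability estimate
```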
Abstract:
With the ever-increasing amount of eHealth data available from various eHealth systems and sources, Health Big Data Analytics promises enticing benefits, such as enabling the discovery of new treatment options and improved decision making. However, concerns over the privacy of this information have hindered its aggregation. To address these concerns, we propose the use of Information Accountability protocols to provide patients with the ability to decide how and when their data can be shared and aggregated for use in big data research. In this paper, we discuss the issues surrounding Health Big Data Analytics and propose a consent-based model to address privacy concerns and aid in achieving the promised benefits of Big Data in eHealth.
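As a toy illustration of such a consent-based gate on aggregation (all field names and rules are invented for this sketch, not the paper's protocol):

```python
from dataclasses import dataclass

@dataclass
class Consent:
    patient_id: str
    allow_aggregation: bool
    allowed_purposes: set  # e.g. {"treatment-discovery", "public-health"}

def may_include(patient_id: str, purpose: str, consents: dict) -> bool:
    """Return True only if the patient consented to aggregation for this purpose."""
    c = consents.get(patient_id)
    return bool(c and c.allow_aggregation and purpose in c.allowed_purposes)

consents = {"p1": Consent("p1", True, {"treatment-discovery"})}
print(may_include("p1", "treatment-discovery", consents))  # True
print(may_include("p1", "marketing", consents))            # False
```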
Abstract:
We consider a dense ad hoc wireless network comprising n nodes confined to a given two-dimensional region of fixed area. For the Gupta-Kumar random traffic model and a realistic interference and path-loss model (i.e., the channel power gains are bounded above, and bounded below by a strictly positive number), we study the scaling of the aggregate end-to-end throughput with respect to the network average power constraint, P̄, and the number of nodes, n. The network power constraint P̄ is related to the per-node power constraint, P, as P̄ = nP. For large P̄, we show that the throughput saturates as Θ(log P̄), irrespective of the number of nodes in the network. For moderate P̄, which can accommodate spatial reuse to improve end-to-end throughput, we observe that the amount of spatial reuse feasible in the network is limited by the diameter of the network. In fact, the end-to-end path loss in the network and the amount of spatial reuse feasible in the network are inversely proportional. This puts a restriction on the gains achievable using previously studied cooperative communication techniques, as these rely on direct long-distance communication over the network.
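Restating the power normalization and the saturation result compactly (T denotes the aggregate end-to-end throughput; the form of the statement, not its content, is ours):

\[
\bar{P} = nP, \qquad T(n,\bar{P}) = \Theta\!\left(\log \bar{P}\right) \quad \text{for large } \bar{P},
\]

independently of n: with channel gains bounded above, extra transmit power buys only the logarithmic rate growth of the underlying point-to-point links.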
Abstract:
Reuse is at the heart of major improvements in productivity and quality in Software Engineering. Both Model Driven Engineering (MDE) and Software Product Line Engineering (SPLE) are software development paradigms that promote reuse. Specifically, they promote systematic reuse and a departure from craftsmanship towards an industrialization of the software development process. MDE and SPLE have established their benefits separately. Their combination, here called Model Driven Product Line Engineering (MDPLE), gathers together the advantages of both. Nevertheless, this blending requires MDE to be recast in SPLE terms. This has implications for both the core assets and the software development process. The challenges are twofold: (i) models become central core assets from which products are obtained, and (ii) the software development process needs to cater for the changes that SPLE and MDE introduce. This dissertation proposes a solution to the first challenge following a feature-oriented approach, with an emphasis on reuse and early detection of inconsistencies. The second part is dedicated to assembly processes, a clear example of the complexity MDPLE introduces in software development processes. This work advocates a new discipline inside the general software development process, i.e., Assembly Plan Management, which raises the abstraction level and increases reuse in such processes. Different case studies illustrate the presented ideas.
Abstract:
Pile reuse has become an increasingly popular option in foundation design, mainly due to its potential cost and environmental benefits and the problem of underground congestion in urban areas. However, key geotechnical concerns remain regarding the behavior of reused piles and the modeling of foundation systems involving old and new piles to support building loads of the new structure. In this paper, a design and analysis tool for pile reuse projects will be introduced. The tool allows coupling of superstructure stiffness with the foundation model, and includes an optimization algorithm to obtain the best configuration of new piles to work alongside reused piles. Under the concept of Pareto Optimality, multi-objective optimization analyses can also reveal the relationship between material usage and the corresponding foundation performance, providing a series of reuse options at various foundation costs. The components of this analysis tool will be discussed and illustrated through a case history in London, where 110 existing piles are reused at a site to support the proposed new development. The case history reveals the difficulties faced by foundation reuse in urban areas and demonstrates the application of the design tool to tackle these challenges.
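A minimal sketch of the Pareto-optimality idea: among candidate new-pile configurations, keep those not dominated on the two competing objectives. The objectives (concrete volume vs. predicted settlement) and the candidate data are illustrative assumptions, not the tool's actual formulation:

```python
import numpy as np

rng = np.random.default_rng(1)
# candidates: column 0 = concrete volume (m^3), column 1 = predicted settlement (mm)
candidates = rng.uniform([50, 5], [500, 50], size=(100, 2))

def pareto_front(points):
    """Return the points not dominated by any other (both objectives minimized)."""
    keep = []
    for i, p in enumerate(points):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            keep.append(p)
    return np.array(keep)

front = pareto_front(candidates)
print(f"{len(front)} non-dominated reuse options out of {len(candidates)}")
```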
Abstract:
Engineering companies face many challenges today, such as increased competition, higher expectations from consumers and decreasing product lifecycle times. This means that product development times must be reduced to meet these challenges. Concurrent engineering, reuse of engineering knowledge and the use of advanced methods and tools are among the ways of reducing product development times. Concurrent engineering is crucial in making sure that products are designed with all issues considered simultaneously. The reuse of engineering knowledge allows existing solutions to be reused and can also help to avoid the mistakes made in previous designs. Computer-based tools are used to store information, automate tasks, distribute work, perform simulation and so forth. This research concerns the evaluation of tools that can be used to support the design process, in terms of how well they capture the information generated during that process; this information is vital to allow the reuse of knowledge. Present CAD systems store only information on the final definition of the product, such as geometry, materials and manufacturing processes. Product Data Management (PDM) systems can manage all this CAD information along with other product-related information. The research includes the evaluation of two PDM systems, Windchill and Metaphase, using the design of a single-handed water tap as a case study. The two PDMs were then compared to PROSUS/DDM using the same case study: PROSUS is the Process-Based Support System proposed by [Blessing 94], and the Design Data Model (DDM) is the product data model that includes PROSUS. The results look promising. PROSUS/DDM is able to capture most design information and structure and present it logically. The design process and product information are related and stored within the DDM structure. The PDMs can capture most design information, but information from early stages of design is stored only as unstructured documentation. Some problems were found with PROSUS/DDM; a proposal is made that may resolve these problems, but this will require further research.
Abstract:
This paper describes a novel approach to the analysis of supply and demand of water in California. A stochastic model is developed to assess the future supply of and demand for water resources in California. The results are presented in the form of a Sankey diagram, where present and stochastically-varying future fluxes of water in California and its sub-regions are traced from source to services by mapping the various transformations of water from when it is first made available for use, through its treatment, recycling and reuse, to its eventual loss in a variety of sinks. This helps to highlight the connections of water with energy and land resources, including the amount of energy used to pump and treat water, the amount of water used for energy production, and the land resources that create a water demand to produce crops for food. By mapping water in this way, policy-makers can more easily understand the competing uses of water, through the identification of the services it delivers (e.g. sanitation, food production, landscaping), the potential opportunities for improving the management of the resource, and the connections with other resources which are often overlooked in a traditional sector-based management strategy. This paper focuses on a Sankey diagram for water, but the ultimate aim is the visualisation of linked resource futures through inter-connected Sankey diagrams for energy, land and water, tracking changes from the basic resources for all three, their transformations, and the final services they provide.
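An illustrative sketch of a source-to-service water Sankey diagram in the spirit of the paper; the flux values and category names are invented placeholders, not the paper's data:

```python
import plotly.graph_objects as go

labels = ["Surface water", "Groundwater", "Treatment", "Agriculture",
          "Urban use", "Recycling", "Losses"]
fig = go.Figure(go.Sankey(
    node=dict(label=labels),
    link=dict(
        source=[0, 1, 2, 2, 4, 5],     # indices into labels (flow origins)
        target=[2, 2, 3, 4, 5, 6],     # flow destinations
        value=[60, 30, 55, 35, 10, 8], # placeholder fluxes (e.g. km^3/yr)
    ),
))
fig.show()
```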
Abstract:
Program design is an area of programming that can benefit significantly from machine-mediated assistance. A proposed tool, called the Design Apprentice (DA), can assist a programmer in the detailed design of programs. The DA supports software reuse through a library of commonly-used algorithmic fragments, or cliches, that codifies standard programming knowledge. The cliche library enables the programmer to describe the design of a program concisely. The DA can detect some kinds of inconsistencies and incompleteness in program descriptions. It automates detailed design by automatically selecting appropriate algorithms and data structures. It supports the evolution of program designs by keeping explicit dependencies between the design decisions made. These capabilities of the DA are underpinned by a model of programming, called programming by successive elaboration, which mimics the way programmers work. Programming by successive elaboration is characterized by the use of breadth-first exposition of layered program descriptions and the successive modification of descriptions. A scenario is presented to illustrate the concept of the DA. Techniques for automating the detailed design process are described. A framework is given in which designs are incrementally augmented and modified by a succession of design steps. A library of cliches and a suite of design steps needed to support the scenario are presented.
Abstract:
M.H. Lee, On Models, Modelling and the Distinctive Nature of Model-Based Reasoning, AI Communications 12(3), pp. 127-137, 1999.
Abstract:
Individuals with elevated levels of plasma low-density lipoprotein (LDL) cholesterol (LDL-C) are considered to be at risk of developing coronary heart disease. LDL particles are removed from the blood by a process known as receptor-mediated endocytosis, which occurs mainly in the liver. A series of classical experiments delineated the major steps in the endocytotic process: apolipoprotein B-100 present on LDL particles binds to a specific receptor (LDL receptor, LDL-R) in specialized areas of the cell surface called clathrin-coated pits. The pit containing the LDL-LDL-R complex is internalized, forming a cytoplasmic endosome. Fusion of the endosome with a lysosome leads to degradation of the LDL into its constituent parts (that is, cholesterol, fatty acids, and amino acids), which are released for reuse by the cell, or are excreted. In this paper, we formulate a mathematical model of LDL endocytosis, consisting of a system of ordinary differential equations. We validate our model against existing in vitro experimental data, and we use it to explore differences in system behavior when a single bolus of extracellular LDL is supplied to cells, compared to when a continuous supply of LDL particles is available. Whereas the former situation is common to in vitro experimental systems, the latter better reflects the in vivo situation. We use asymptotic analysis and numerical simulations to study the long-time behavior of model solutions. The implications of model-derived insights for experimental design are discussed.
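A minimal compartmental sketch of the kind of ODE system the abstract describes, with assumed variables (extracellular LDL L, receptor-bound LDL B, internalized LDL I, free receptors R) and assumed rate constants; the supply term s(t) is a single bolus in the in vitro setting and a constant in the in vivo one:

\[
\begin{aligned}
\frac{dL}{dt} &= s(t) - k_b L R + k_u B, \\
\frac{dB}{dt} &= k_b L R - k_u B - k_i B, \\
\frac{dI}{dt} &= k_i B - k_d I, \\
\frac{dR}{dt} &= -k_b L R + k_u B + f\, k_d I,
\end{aligned}
\]

where k_b, k_u, k_i, k_d are binding, unbinding, internalization and degradation rates, and f is the fraction of receptors recycled to the cell surface.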
Abstract:
The purpose of this study was to develop an understanding of the current state of scientific data sharing that stakeholders could use to develop and implement effective data sharing strategies and policies. The study developed a conceptual model to describe the process of data sharing, and the drivers, barriers, and enablers that determine stakeholder engagement. The conceptual model was used as a framework to structure discussions and interviews with key members of all stakeholder groups. Analysis of data obtained from interviewees identified a number of themes that highlight key requirements for the development of a mature data sharing culture.
Abstract:
Scientific workflows are becoming a valuable tool for scientists to capture and automate e-Science procedures. Their success brings the opportunity to publish, share, reuse and repurpose this explicitly captured knowledge. Within the myGrid project, we have identified key resources that can be shared, including complete workflows, fragments of workflows and constituent services. We have examined the alternative ways these can be described by their authors (and subsequent users), and developed a unified descriptive model to support their later discovery. By basing this model on existing standards, we have been able to extend existing Web Service and Semantic Web Service infrastructure whilst still supporting the specific needs of the e-Scientist. myGrid components enable a workflow life-cycle that extends beyond execution to include discovery of previous relevant designs, reuse of those designs, and subsequent publication. Experience with example groups of scientists indicates that this cycle is valuable. The growing number of workflows and services means more work is needed to support the user in effective ranking of search results, and to support the repurposing process.
Abstract:
Electronic applications are currently developed under the reuse-based paradigm. This design methodology presents several advantages for the reduction of design complexity, but brings new challenges for the test of the final circuit. The access to embedded cores, the integration of several test methods, and the optimization of several cost factors are just a few of the problems that need to be tackled during test planning. Within this context, this thesis proposes two test planning approaches that aim at reducing the test costs of a core-based system by means of hardware reuse and integration of the test planning into the design flow.

The first approach considers systems whose cores are connected directly or through a functional bus. The test planning method consists of a comprehensive model that includes the definition of a multi-mode access mechanism inside the chip and a search algorithm for the exploration of the design space. The access mechanism model considers the reuse of functional connections as well as partial test buses, core transparency, and other bypass modes. The test schedule is defined in conjunction with the access mechanism so that good trade-offs among the costs of pins, area, and test time can be sought. Furthermore, system power constraints are also considered. This expansion of concerns makes possible an efficient, yet fine-grained, search in the huge design space of a reuse-based environment. Experimental results clearly show the variety of trade-offs that can be explored using the proposed model, and its effectiveness in optimizing the system test plan.

Networks-on-chip are likely to become the main communication platform of systems-on-chip. Thus, the second approach proposes the reuse of the on-chip network for the test of the cores embedded in the systems that use this communication platform. A power-aware test scheduling algorithm aiming at exploiting the network characteristics to minimize the system test time is presented. The reuse strategy is evaluated considering a number of system configurations, such as different positions of the cores in the network, power consumption constraints, and the number of interfaces with the tester. Experimental results show that the parallelization capability of the network can be exploited to reduce the system test time, whereas area and pin overhead are strongly minimized.

In this manuscript, the main problems of the test of core-based systems are first identified and the current solutions are discussed. The problems tackled by this thesis are then listed and the test planning approaches are detailed. Both test planning techniques are validated on the recently released ITC'02 SoC Test Benchmarks, and further compared to other test planning methods from the literature. This comparison confirms the efficiency of the proposed methods.
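To make the power-aware scheduling idea concrete, below is a minimal greedy sketch: core tests are started in parallel whenever the summed peak power stays under a budget, and time advances to the next test completion. The core list, power figures and the greedy rule are illustrative assumptions, not the thesis's algorithm or the ITC'02 benchmark data:

```python
# cores: (name, test_time, peak_power); each core's power fits the budget alone,
# so the loop below always makes progress.
cores = [("c1", 120, 30), ("c2", 80, 50), ("c3", 200, 20), ("c4", 60, 40)]
POWER_BUDGET = 80

time, running, schedule = 0, [], []           # running: (end_time, power)
pending = sorted(cores, key=lambda c: -c[1])  # longest test first

while pending or running:
    used = sum(p for _, p in running)
    for c in list(pending):                   # start every test that fits now
        if used + c[2] <= POWER_BUDGET:
            running.append((time + c[1], c[2]))
            schedule.append((c[0], time))
            pending.remove(c)
            used += c[2]
    if running:                               # advance to the next completion
        t_next = min(end for end, _ in running)
        running = [(e, p) for e, p in running if e > t_next]
        time = t_next

print(schedule, "makespan:", time)
```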