889 results for User-based design


Relevance:

30.00%

Publisher:

Abstract:

All public school districts, vocational centers, charter schools and special education cooperatives must submit the Annual Claim for Pupil Transportation Reimbursement (ISBE 50-23) electronically through a web-based system named the "Pupil Transportation Claim Reimbursement System," or "PTCRS."

Relevance:

30.00%

Publisher:

Abstract:

The mountain ranges and coastlines of Washington State have steep slopes, and they are susceptible to landslides triggered by intense rainstorms, rapid snowmelt, earthquakes, and rivers and waves undermining slope stability. Over a 30-year timespan (1984-2014, which includes the State Route (SR) 530 event), a total of 28 deep-seated landslides caused 300 million dollars of damage and 45 deaths (DGER, 2015). During that same timeframe, ten storm events triggered shallow landslides and debris flows across the state, resulting in nine deaths (DGER, 2015). The loss of 43 people, when the SR 530 complex reactivated and moved at a rate and over a distance unexpected by residents, highlighted the need for an inventory of the state's landslides. With only 13% of the state mapped (Lombardo et al., 2015), the intention of this statewide inventory is to communicate hazards to citizens and decision makers. In order to compile an accurate and consistent landslide inventory, Washington needs to adopt a geographic information system (GIS) based mapping protocol. A mapping protocol provides consistency for measuring and recording information about landslides, such as the type of landslide, the material involved, and the size of the movement. The state of Oregon shares landslide problems similar to Washington's, and it created a GIS-based mapping protocol designed to inform its residents while also saving money and reducing costly hours in the field (Burns and Madin, 2009). To determine whether the Oregon Department of Geology and Mineral Industries (DOGAMI) protocol, developed by Burns and Madin (2009), could serve as the basis for establishing Washington's protocol, I used the office-based DOGAMI protocol to map landslides along a 40-50 km (25-30 mile) shoreline in Thurston County, Washington. I then compared my results to the field-based landslide inventory created in 2009 by the Washington Division of Geology and Earth Resources (DGER) along this same shoreline. If the landslide area I mapped reasonably equaled the area of the DGER (2009) inventory, I would consider the DOGAMI protocol useful for Washington as well. Utilizing 1 m resolution lidar flown for Thurston County in 2011 and a GIS platform, I mapped 36 landslide deposits and scarp flanks, covering a total area of 879,530 m2 (9,467,160 ft2). I also found 48 recent events within these deposits. With the exception of two slides, all of the movements occurred within the last fifty years. Along this same coastline, the DGER (2009) recorded 159 individual landslides and complexes, for a total area of 3,256,570 m2 (35,053,400 ft2). At first glance it appears the DGER (2009) effort found a larger total number and total area of landslides. However, in addition to their field inventory, they digitized landslides previously mapped by other researchers, and they did not field-confirm these landslides, which cover a total area of 2,093,860 m2 (22,538,150 ft2) (DGER, 2009). With this questionable landslide area removed, and allowing for landslide toes and underwater portions that I could not map because I did not have a bathymetry dataset, my results are within 6,580 m2 (70,840 ft2) of the DGER's results. This similarity shows that the DOGAMI protocol provides a consistent and accurate approach to creating a landslide inventory. With a few additional modifications, I recommend that Washington State adopt the DOGAMI protocol. Acquiring additional 1 m lidar and adopting a modified DOGAMI protocol would poise the DGER to map the remaining 87% of the state, with the ultimate goal of informing citizens and decision makers of the locations and frequencies of landslide hazards on a user-friendly GIS platform.
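Restating the area comparison with only the figures quoted above (the adjustment for landslide toes and underwater portions is not itemised in the abstract and appears here as the unexplained remainder):

    \[
    A_{\text{DGER, adjusted}} = 3{,}256{,}570\,\text{m}^2 - 2{,}093{,}860\,\text{m}^2 = 1{,}162{,}710\,\text{m}^2
    \]
    \[
    A_{\text{DGER, adjusted}} - A_{\text{DOGAMI}} = 1{,}162{,}710\,\text{m}^2 - 879{,}530\,\text{m}^2 = 283{,}180\,\text{m}^2
    \]

Of this 283,180 m2 gap, the abstract attributes all but the reported 6,580 m2 discrepancy (roughly 276,600 m2) to the toes and underwater portions that could not be mapped without bathymetry.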

Relevance:

30.00%

Publisher:

Abstract:

A parallel computing environment to support optimization of large-scale engineering systems is designed and implemented on Windows-based personal computer networks, using the master-worker model and the Parallel Virtual Machine (PVM). It involves decomposing a large engineering system into a number of smaller subsystems that are optimized in parallel on worker nodes, and coordinating the subsystem optimization results on the master node. The environment consists of six functional modules: the master control, the optimization model generator, the optimizer, the data manager, the monitor, and the post-processor. An object-oriented design of these modules is presented. The environment supports all steps from the generation of optimization models to their solution and visualization on networks of computers. User-friendly graphical interfaces make it easy to define the problem and to monitor and steer the optimization process. The environment has been verified with an example of a large space truss optimization. (C) 2004 Elsevier Ltd. All rights reserved.
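As a rough illustration of this master-worker decomposition, the sketch below uses Python's multiprocessing pool as a stand-in for PVM; the subsystem objective, coupling-variable update, and convergence test are placeholder assumptions, not the environment's actual modules.

    # Illustrative master-worker sketch: workers optimize subsystems in parallel,
    # the master coordinates their results and iterates until convergence.
    from multiprocessing import Pool

    def optimize_subsystem(args):
        """Worker task: locally 'optimize' one subsystem given shared coupling variables."""
        subsystem_id, coupling = args
        # Placeholder local search; a real system would call a proper optimizer here.
        return {"id": subsystem_id, "objective": sum(coupling) / (subsystem_id + 1)}

    def master_loop(n_subsystems, coupling, max_iters=10, tol=1e-6):
        """Master node: farm out subsystem optimizations, then coordinate the results."""
        with Pool() as pool:
            previous = float("inf")
            for _ in range(max_iters):
                results = pool.map(optimize_subsystem,
                                   [(i, coupling) for i in range(n_subsystems)])
                total = sum(r["objective"] for r in results)   # coordination step
                if abs(previous - total) < tol:                 # simple convergence test
                    break
                previous = total
                coupling = [c * 0.9 for c in coupling]          # placeholder coupling update
        return total

    if __name__ == "__main__":
        print(master_loop(n_subsystems=4, coupling=[1.0, 2.0, 3.0]))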

Relevance:

30.00%

Publisher:

Abstract:

User requirements for multimedia authentication vary. In some cases, the user requires an authentication system to monitor a set of specific areas, each with its own sensitivity, while neglecting other modifications. Most existing fragile watermarking schemes are mixed systems, which cannot satisfy such precise user requirements. Therefore, in this paper we design a sensor-based multimedia authentication architecture. The system consists of sensor combinations and a fuzzy response logic system. A sensor is designed to respond strictly to tampering of a given type within a given area. With this scheme, any complicated authentication requirement can be satisfied, and problems such as error-tolerant detection of the tampering method are easily resolved. We also provide experiments to demonstrate the implementation of the sensor-based system.
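A minimal sketch of the sensor idea: each sensor watches one image region and responds only when that region changes. The region coordinates, hash choice, and response rule below are illustrative assumptions, not the paper's scheme.

    # One "sensor" per watched region: it fires when the region's content changes.
    import hashlib

    class RegionSensor:
        def __init__(self, name, region):
            self.name = name
            self.region = region          # (row0, row1, col0, col1)
            self.reference = None

        def _digest(self, image):
            r0, r1, c0, c1 = self.region
            block = bytes(v for row in image[r0:r1] for v in row[c0:c1])
            return hashlib.sha256(block).hexdigest()

        def register(self, image):
            self.reference = self._digest(image)

        def check(self, image):
            """Return True if the watched region is unchanged."""
            return self._digest(image) == self.reference

    # Usage: image as a list of rows of pixel values (0-255).
    image = [[10, 20, 30, 40], [50, 60, 70, 80], [90, 100, 110, 120]]
    sensor = RegionSensor("logo-area", (0, 2, 0, 2))
    sensor.register(image)
    image[0][1] = 99                      # tamper inside the watched area
    print(sensor.check(image))            # False: the sensor fires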

Relevance:

30.00%

Publisher:

Abstract:

Despite decades of research, the uptake of formal methods for developing provably correct software in industry remains slow. One reason for this is the high cost of proof construction, an activity that, due to the complexity of the required proofs, is typically carried out using interactive theorem provers. In this paper we propose an agent-oriented architecture for interactive theorem proving with the aim of reducing the user interaction (and thus the cost) involved in constructing software verification proofs. We describe a prototype implementation of our architecture and discuss its application to a small but non-trivial case study.
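As a loose sketch of the idea, the code below lets several "tactic agents" attempt a goal and falls back to the user only when all of them fail; the goals, tactics, and success test are toy stand-ins, not the prover interface used in the paper.

    # Toy tactic agents over string goals of the form "lhs = rhs".
    def simplify(goal):  return goal.replace("x + 0", "x")
    def normalize(goal): return goal.replace("0 + x", "x")

    TACTIC_AGENTS = [("simplify", simplify), ("normalize", normalize)]

    def closed(goal):
        """A goal 'lhs = rhs' counts as closed when both sides are identical."""
        lhs, _, rhs = goal.partition("=")
        return rhs != "" and lhs.strip() == rhs.strip()

    def discharge(goal):
        """Try each agent in turn; ask the user only if every agent fails."""
        for name, tactic in TACTIC_AGENTS:
            if closed(tactic(goal)):
                return f"closed by {name} agent"
        return "user interaction required"

    print(discharge("x + 0 = x"))    # closed by simplify agent
    print(discharge("f(x) = g(x)"))  # user interaction required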

Relevance:

30.00%

Publisher:

Abstract:

Web transaction data between Web visitors and Web functionalities usually conveys task-oriented user behavior patterns. Mining this type of click-stream data captures usage-pattern information. Web usage mining has become one of the most widely used methods for Web recommendation, which customizes Web content to a user's preferred style. Traditional Web usage mining techniques, such as Web user session or Web page clustering, association rule mining and frequent navigational path mining, can only discover usage patterns explicitly. They cannot reveal the underlying navigational activities or identify the latent relationships among Web users and Web pages that are associated with those patterns. In this work, we propose a Web recommendation framework incorporating a Web usage mining technique based on the Probabilistic Latent Semantic Analysis (PLSA) model. The main advantage of this method is that it discovers not only usage-based access patterns but also the underlying latent factors. With the discovered user access patterns, we then present users with content of greater interest via collaborative recommendation. To validate the effectiveness of the proposed approach, we conduct experiments on real-world datasets and make comparisons with existing traditional techniques. The preliminary experimental results demonstrate the usability of the proposed approach.
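The sketch below illustrates the PLSA idea in this setting under simplifying assumptions: a toy session-by-page click-count matrix is factorised into latent "tasks" with EM, and pages are ranked from a session's dominant factor. The data, number of factors, and recommendation rule are illustrative, not those used in the paper.

    # Minimal PLSA (aspect model) fitted with EM on a toy click-count matrix.
    import numpy as np

    def plsa(counts, k, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        n_u, n_w = counts.shape
        pz  = np.full(k, 1.0 / k)
        puz = rng.random((n_u, k)); puz /= puz.sum(axis=0)   # P(session | z)
        pwz = rng.random((n_w, k)); pwz /= pwz.sum(axis=0)   # P(page | z)
        for _ in range(iters):
            # E-step: responsibilities P(z | session, page)
            joint = np.einsum("z,uz,wz->uwz", pz, puz, pwz)
            joint /= joint.sum(axis=2, keepdims=True) + 1e-12
            weighted = counts[:, :, None] * joint
            # M-step: re-estimate the factor distributions
            pz  = weighted.sum(axis=(0, 1)); pz /= pz.sum()
            puz = weighted.sum(axis=1); puz /= puz.sum(axis=0) + 1e-12
            pwz = weighted.sum(axis=0); pwz /= pwz.sum(axis=0) + 1e-12
        return pz, puz, pwz

    # Toy session-by-page click counts (3 sessions, 4 pages).
    counts = np.array([[5, 4, 0, 0],
                       [4, 5, 1, 0],
                       [0, 0, 5, 4]], dtype=float)
    pz, puz, pwz = plsa(counts, k=2)
    dominant = np.argmax(puz[0])            # latent task of session 0
    print(np.argsort(-pwz[:, dominant]))    # pages ranked for recommendation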

Relevance:

30.00%

Publisher:

Abstract:

Collaborative recommendation is one of the most widely used types of recommender system; it recommends items to a visitor by referring to the preferences of other users who are similar to the current user. User profiling over Web transaction data can capture such informative knowledge about user tasks and interests. With the discovered usage-pattern information, it becomes possible to recommend more relevant content to Web users, or to customize the Web presentation for visitors, via collaborative recommendation. It also helps identify the underlying relationships among Web users, items, and latent tasks during Web mining. In this paper, we propose a Web recommendation framework based on a user profiling technique. In this approach, we employ Probabilistic Latent Semantic Analysis (PLSA) to model the co-occurrence activities and develop a modified k-means clustering algorithm to build user profiles as representatives of usage patterns. Moreover, a hidden task model is derived by characterizing the meaningful latent factor space. With the discovered user profiles, we then choose the best-matching profile, i.e. the one whose preferences are closest to those of the current user, and make collaborative recommendations based on the corresponding page weights in the selected user profile. Preliminary experimental results on real-world data sets show that the proposed approach is capable of making recommendations accurately and efficiently.
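As a hedged illustration of the profile-matching step, the sketch below treats user profiles as page-weight vectors (e.g., centroids from the modified k-means step), matches the active session to the most similar profile by cosine similarity, and recommends the highest-weighted unvisited pages; the profiles and session vector are invented for illustration.

    # Match the active session to the closest user profile, then recommend
    # the pages that profile weights highly but the session has not visited.
    import numpy as np

    profiles = {                       # page-weight vectors for pages p0..p4
        "news-readers":  np.array([0.9, 0.7, 0.1, 0.0, 0.0]),
        "shop-browsers": np.array([0.0, 0.1, 0.8, 0.9, 0.6]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def recommend(session, top_n=2):
        name, profile = max(profiles.items(), key=lambda kv: cosine(session, kv[1]))
        unvisited = np.where(session == 0)[0].tolist()
        ranked = sorted(unvisited, key=lambda p: -profile[p])
        return name, ranked[:top_n]

    session = np.array([1.0, 0.0, 0.0, 1.0, 0.0])   # current user's visited pages
    print(recommend(session))                        # best profile and suggested pages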

Relevance:

30.00%

Publisher:

Abstract:

Pervasive computing applications must be sufficiently autonomous to adapt their behaviour to changes in computing resources and user requirements. This capability is known as context-awareness. In some cases, context-aware applications must be implemented as autonomic systems that are capable of dynamically discovering and replacing context sources (sensors) at run-time. Unlike other types of application autonomy, this kind of dynamic reconfiguration has not yet been sufficiently investigated by the research community. However, application-level context models are becoming common as a way to ease the programming of context-aware applications and to support evolution by decoupling applications from context sources. We can leverage these context models to develop general (i.e., application-independent) solutions for dynamic, run-time discovery of context sources (i.e., context management). This paper presents a model and architecture for a reconfigurable context management system that supports interoperability by building on emerging standards for sensor description and classification.
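A minimal sketch of the run-time discovery and replacement idea, assuming a simple registry keyed by context type; the class, method, and sensor names are illustrative, not the architecture's actual interfaces.

    # Context sources register by the context type they provide; the manager
    # transparently falls back to another provider when one disappears.
    class ContextManager:
        def __init__(self):
            self.providers = {}                 # context type -> list of sensors

        def register(self, context_type, sensor):
            self.providers.setdefault(context_type, []).append(sensor)

        def unregister(self, context_type, sensor):
            self.providers.get(context_type, []).remove(sensor)

        def get(self, context_type):
            """Return a reading from the first available provider of this type."""
            for sensor in self.providers.get(context_type, []):
                try:
                    return sensor()
                except RuntimeError:            # provider gone: try the next one
                    continue
            raise LookupError(f"no provider for {context_type!r}")

    def gps_sensor():  raise RuntimeError("GPS unavailable indoors")
    def wifi_sensor(): return ("wifi-fingerprint", 52.52, 13.40)

    manager = ContextManager()
    manager.register("location", gps_sensor)
    manager.register("location", wifi_sensor)
    print(manager.get("location"))   # falls back to the Wi-Fi location provider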

Relevance:

30.00%

Publisher:

Abstract:

Current image database metadata schemas require users to adopt a specific text-based vocabulary. Text-based metadata is good for searching but not for browsing. Existing image-based search facilities, on the other hand, are highly specialised and so suffer from similar problems. Wexelblat's semantic dimensional spatial visualisation schemas go some way towards addressing this problem by making both searching and browsing more accessible to the user in a single interface. But the question of how, and what, initial metadata to enter into a database remains. Different people see different things in an image and will organise a collection in equally diverse ways. However, we can find some similarity across groups of users regardless of their reasoning. For example, a search on Amazon.com also returns other products, based on an averaging of how users navigate the database. In this paper, we report on applying this concept to a set of images, which we visualised using both traditional methods and the Amazon.com method. We report on the findings of this comparative investigation in a case study involving a group of randomly selected participants. We conclude with the recommendation that, in combination, the traditional and averaging methods would enhance current database visualisation, searching, and browsing facilities.
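A small sketch of the Amazon.com-style "averaging" idea applied to images, under the assumption that browsing sessions are available as lists of viewed images; the session data and function names are illustrative.

    # Count how often images are viewed in the same session and suggest the
    # most frequently co-viewed images when one of them is browsed.
    from collections import defaultdict
    from itertools import combinations

    sessions = [
        ["sunset.jpg", "beach.jpg", "palm.jpg"],
        ["sunset.jpg", "beach.jpg"],
        ["mountain.jpg", "lake.jpg", "sunset.jpg"],
    ]

    co_views = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for a, b in combinations(set(session), 2):
            co_views[a][b] += 1
            co_views[b][a] += 1

    def also_viewed(image, top_n=2):
        neighbours = co_views[image]
        return sorted(neighbours, key=neighbours.get, reverse=True)[:top_n]

    print(also_viewed("sunset.jpg"))   # e.g. ['beach.jpg', 'palm.jpg']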

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes an architecture for pervasive computing which utilizes context information to provide adaptations based on vertical handovers (handovers between heterogeneous networks) while supporting application Quality of Service (QoS). The future of mobile computing will see an increase in ubiquitous network connectivity, allowing users to roam freely between heterogeneous networks. One of the requirements for pervasive computing is to adapt computing applications or their environment if current applications can no longer be provided with the requested QoS. One possible adaptation is a vertical handover to a different network. Vertical handover operations include changing network interfaces on a single device or switching between different devices. Such handovers should be performed with minimal user distraction and minimal violation of communication QoS for user applications. The solution utilises context information regarding user devices, user location, application requirements, and the network environment. The paper shows how vertical handover adaptations are incorporated into the whole infrastructure of a pervasive system.
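As a rough sketch of a context-driven handover decision, the code below picks the cheapest candidate network that satisfies the application's QoS requirements; the network attributes, thresholds, and selection rule are illustrative assumptions, not the paper's architecture.

    # Choose a handover target: cheapest network that meets the application QoS.
    from dataclasses import dataclass

    @dataclass
    class Network:
        name: str
        bandwidth_mbps: float
        latency_ms: float
        cost: float        # relative monetary/energy cost

    def select_network(networks, required_bandwidth, max_latency):
        """Return the cheapest network meeting the application's QoS, else None."""
        candidates = [n for n in networks
                      if n.bandwidth_mbps >= required_bandwidth
                      and n.latency_ms <= max_latency]
        return min(candidates, key=lambda n: n.cost, default=None)

    networks = [
        Network("cellular-5g", bandwidth_mbps=50, latency_ms=40, cost=3.0),
        Network("campus-wlan", bandwidth_mbps=30, latency_ms=15, cost=1.0),
        Network("bluetooth-pan", bandwidth_mbps=2, latency_ms=60, cost=0.5),
    ]

    target = select_network(networks, required_bandwidth=10, max_latency=50)
    print(f"handover to {target.name}" if target else "no suitable network: adapt the application")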

Relevance:

30.00%

Publisher:

Abstract:

The goal of this work was to provide professional and amateur writers with a new way of enhancing their productivity and mental well-being, by helping them overcome writer's block and achieve a state of optimal experience while writing. Our approach is based on bringing together different components to create what we call a creative moment. A creative moment is composed of an image, a text, a mood, a location and a colour. The colour presented in the creative moment varies according to the mood associated with it. With creative moments we hoped to give our users an easy way to trigger their creativity and kick-start their work. We describe the prototyping of a web crowdsourcing platform, named CreativeWall, and of a Microsoft Word Add-In used in the user study we performed, and we discuss their implementations. The user study reveals that our approach has a positive influence on participants' productivity when compared with an existing approach. The study also revealed that our approach can ease the process of achieving a state of optimal experience by enhancing one of the dimensions of Flow Theory. Finally, we present some possible future developments for the concept created during this work.
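A minimal sketch of the creative-moment structure as described, with the colour derived from the mood; the mood list and mood-to-colour mapping are invented for illustration.

    # A creative moment bundles an image, a text, a mood, a location and a
    # colour, where the colour is derived from the mood.
    from dataclasses import dataclass, field

    MOOD_COLOURS = {"joyful": "#FFD700", "calm": "#87CEEB", "melancholic": "#708090"}

    @dataclass
    class CreativeMoment:
        image_url: str
        text: str
        mood: str
        location: str
        colour: str = field(init=False)

        def __post_init__(self):
            self.colour = MOOD_COLOURS.get(self.mood, "#FFFFFF")

    moment = CreativeMoment(
        image_url="https://example.org/pier.jpg",
        text="The tide pulls the evening out to sea.",
        mood="calm",
        location="Lisbon, Portugal",
    )
    print(moment.colour)   # "#87CEEB"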

Relevance:

30.00%

Publisher:

Abstract:

Automatic ontology building is a vital issue in the many fields where ontologies are currently built manually. This paper presents a user-centred methodology for ontology construction based on the use of Machine Learning and Natural Language Processing. In our approach, the user selects a corpus of texts and sketches a preliminary ontology (or selects an existing one) for a domain, with a preliminary vocabulary associated with the elements of the ontology (lexicalisations). Examples of sentences involving such lexicalisations (e.g. of the ISA relation) are automatically retrieved from the corpus by the system. Retrieved examples are validated by the user and used by an adaptive Information Extraction system to generate patterns that discover other lexicalisations of the same objects in the ontology, possibly identifying new concepts or relations. New instances are added to the existing ontology or used to tune it. This process is repeated until a satisfactory ontology is obtained. The methodology largely automates the ontology construction process, and the output is an ontology with an associated trained learner that can be used for further ontology modifications.
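The sketch below illustrates one turn of this loop under simplifying assumptions: a single Hearst-style "X is a Y" pattern stands in for the learned extraction patterns, and candidates would be confirmed by the user before being added; the corpus and pattern are illustrative.

    # Propose new instances of a concept from "X is a Y" sentences in the corpus.
    import re

    corpus = [
        "A trumpet is a brass instrument.",
        "The violin is a string instrument.",
        "Oak is a hardwood.",
    ]

    ontology = {"instrument": set()}          # concept -> known instances

    ISA_PATTERN = re.compile(r"(?:A |The )?(\w+) is an? (?:\w+ )*?(\w+)\.", re.IGNORECASE)

    def propose_instances(corpus, concept):
        """Return candidate instances of the concept for the user to validate."""
        candidates = []
        for sentence in corpus:
            match = ISA_PATTERN.search(sentence)
            if match and match.group(2).lower() == concept:
                candidates.append(match.group(1).lower())
        return candidates

    for instance in propose_instances(corpus, "instrument"):
        ontology["instrument"].add(instance)   # in the real loop, only after user validation
    print(ontology)                            # e.g. {'instrument': {'trumpet', 'violin'}}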

Relevance:

30.00%

Publisher:

Abstract:

The data available during the drug discovery process is vast in amount and diverse in nature. To gain useful information from such data, an effective visualisation tool is required. To provide better visualisation facilities to domain experts (screening scientists, biologists, chemists, etc.), we developed software based on recently developed principled visualisation algorithms such as Generative Topographic Mapping (GTM) and Hierarchical Generative Topographic Mapping (HGTM). The software also supports conventional visualisation techniques such as Principal Component Analysis (PCA), NeuroScale, PhiVis, and Locally Linear Embedding (LLE). In addition, it provides global and local regression facilities, supporting algorithms such as the Multilayer Perceptron (MLP), Radial Basis Function networks (RBF), Generalised Linear Models (GLM), Mixture of Experts (MoE), and the newly developed Guided Mixture of Experts (GME). This user manual gives an overview of the purpose of the software tool, highlights some of the issues to be taken care of while creating a new model, and provides information about how to install and use the tool. The user manual does not require readers to be familiar with the algorithms it implements; basic computing skills are enough to operate the software.
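For orientation only, the snippet below shows the simplest of the listed techniques, a PCA projection of a toy descriptor matrix to two dimensions; it is generic numpy code, not the tool's own implementation.

    # Project a toy compounds-by-descriptors matrix onto its first two
    # principal axes, the kind of 2-D map used for visualisation.
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(size=(100, 5))            # 100 compounds x 5 descriptors (toy)
    centred = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)   # principal axes
    projection = centred @ vt[:2].T             # coordinates for a 2-D scatter plot
    print(projection.shape)                     # (100, 2)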

Relevance:

30.00%

Publisher:

Abstract:

When constructing and using environmental models, it is typical that many of the inputs to the models will not be known perfectly. In some cases, it will be possible to make observations, or occasionally to use physics-based uncertainty propagation, to ascertain the uncertainty on these inputs. However, such observations are often not available, or not even possible, and another approach to characterising the uncertainty on the inputs must be sought. Even when observations are available, if the analysis is being carried out within a Bayesian framework then prior distributions will have to be specified. One option for gathering, or at least estimating, this information is to employ expert elicitation. Expert elicitation is well studied within statistics and psychology and involves the assessment of the beliefs of a group of experts about an uncertain quantity (for example, an input or parameter within a model), typically in terms of obtaining a probability distribution. One of the challenges in expert elicitation is to minimise the biases that might enter into the judgements made by the individual experts, and then to come to a consensus decision within the group of experts. Effort is made in the elicitation exercise to prevent biases clouding the judgements, through well-devised questioning schemes. It is also important that, when reaching a consensus, the experts are exposed to the knowledge of the others in the group.

Within the FP7 UncertWeb project (http://www.uncertweb.org/), there is a requirement to build a Web-based tool for expert elicitation. In this paper, we discuss some of the issues involved in building a Web-based elicitation system, covering both the technological aspects and the statistical and scientific issues. In particular, we demonstrate two tools: a Web-based system for the elicitation of continuous random variables, and a system designed to elicit uncertainty about categorical random variables in the setting of landcover classification uncertainty. The first of these examples is a generic tool developed to elicit uncertainty about univariate continuous random variables. It is designed to be used within an application context and extends the existing SHELF method, adding a web interface and access to metadata. The tool is developed so that it can be readily integrated with environmental models exposed as web services. The second example was developed for the TREES-3 initiative, which monitors tropical landcover change through ground-truthing at confluence points. It allows experts to validate the accuracy of automated landcover classifications using site-specific imagery and local knowledge. Experts may provide uncertainty information at various levels: from a general rating of their confidence in a site validation to a numerical ranking of the possible landcover types within a segment.

A key challenge in the web-based setting is the design of the user interface and the method of interaction between the problem owner and the problem experts. We show the workflow of the elicitation tool, and show how we can represent the final elicited distributions and confusion matrices using UncertML, ready for integration into uncertainty-enabled workflows. We also show how the metadata associated with the elicitation exercise is captured and can be referenced from the elicited result, providing crucial lineage information and thus traceability in the decision-making process.
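As an illustration of the kind of judgement the continuous-variable tool elicits, the sketch below fits a normal distribution to an expert's stated median and 90% interval; matching quantiles to a normal is an assumption made here for illustration and is not the SHELF procedure itself.

    # Fit a normal distribution to an elicited median and 5%/95% quantiles.
    from statistics import NormalDist

    def fit_normal(median, p05, p95):
        """Return the normal distribution matching the elicited quantiles."""
        z95 = NormalDist().inv_cdf(0.95)          # ~1.645
        sigma = (p95 - p05) / (2 * z95)
        return NormalDist(mu=median, sigma=sigma)

    # Hypothetical expert judgement about a model input: median 2.0,
    # 90% sure the value lies between 1.2 and 3.4.
    dist = fit_normal(median=2.0, p05=1.2, p95=3.4)
    print(round(dist.cdf(3.0), 3))                # probability the input is below 3.0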

Relevance:

30.00%

Publisher:

Abstract:

This thesis describes research on End-User Computing (EUC) in small businesses operating in an environment where no Information System (IS) support and expertise are available. The research aims to identify the factors that contribute to EUC Sophistication and to understand the extent to which small firms are capable of developing their own applications. The intention is to assist small firms in adopting EUC, encourage better utilisation of their IT resources, and help them gain the benefits associated with computerisation. The factors examined are derived inductively from previous studies, and a model is developed to map these factors to the degree of sophistication associated with IT and EUC. This study combines the predictive power of quantitative research, through surveys, with the explanatory power of qualitative research, through an action-oriented case study. Following a critical examination of the literature, a survey of IT Adoption and EUC was conducted. Instruments were then developed to measure EUC and IT Sophistication indexes, based on sophistication constructs adapted from previous studies, using data from the survey. This is followed by an in-depth action case study involving two small firms to investigate the EUC phenomenon in its real-life context. The accumulated findings from these mixed research strategies are used to form the final model of EUC Sophistication in small business. Results of the study suggest that both EUC Sophistication and the Presence of EUC in small business are affected by Management Support and Behaviour towards EUC. Additionally, EUC Sophistication is affected by the presence of an EUC Champion. Results are also consistent with respect to the independence between IT Sophistication and EUC Sophistication. The main research contributions include accumulated knowledge of EUC in small business, the Model of EUC Sophistication, an instrument to measure an EUC Sophistication Index for small firms, and a contribution to research methods in IS.