889 results for User-based design


Relevance:

30.00%

Publisher:

Abstract:

Humanitarian action evaluation (HAE) is a valued tool for supporting the accountability, transparency, and efficiency of humanitarian programmes that help reduce inequities and promote global health. HAE is essential for programme stakeholders, funders, decision-makers, and practitioners seeking to integrate evidence into practice and decision-making. However, evaluation use (EU) remains uncertain: HAE is frequently conducted but goes unused. Moreover, the conditions influencing EU vary across contexts, and their presence and applicability within humanitarian non-governmental organisations (NGOs) remain poorly documented. Evaluators, stakeholders, and decision-makers in humanitarian settings who wish to ensure sustained EU have few reference points, since studies examining EU and its conditions over the long term are rare. This thesis seeks to clarify these issues by documenting, over a two-year period, EU and the conditions that determine it within an evaluation strategy integrated into the healthcare user-fee exemption programme of a humanitarian NGO. The programme aims to facilitate access to healthcare for mothers, children under five, and indigent persons in health districts of Niger and Burkina Faso, regions of the Sahel where food and economic crises have produced high rates of malnutrition, morbidity, and mortality. A first evaluation of the exemption programme in Niger led to the development of the evaluation strategy integrated into the same programme in Burkina Faso. The thesis comprises three articles. The first presents an evaluability study, a preliminary step that established the feasibility of the thesis. The results show a coherent and plausible logic for the evaluation strategy, accessible data, and the usefulness of studying EU by the NGO. The second article documents stakeholders' use of the strategy and how it served the exemption programme. Use of the findings was instrumental, conceptual, and persuasive, whereas use of the process was only instrumental and conceptual. The third article documents the conditions that, according to the stakeholders, progressively influenced EU. Users' attitudes, interpersonal relationships and communication, and the evaluators' ability to produce and share knowledge tailored to users' needs were the key conditions linked to EU. The thesis advances knowledge on EU in humanitarian settings and offers recommendations to the NGO's stakeholders.

Relevance:

30.00%

Publisher:

Abstract:

User behaviour is a significant determinant of a product’s environmental impact; while engineering advances permit increased efficiency of product operation, the user’s decisions and habits ultimately have a major effect on the energy or other resources used by the product. There is thus a need to change users’ behaviour. A range of design techniques developed in diverse contexts suggest opportunities for engineers, designers and other stakeholders working in the field of sustainable innovation to affect users’ behaviour at the point of interaction with the product or system, in effect ‘making the user more efficient’. Approaches to changing users’ behaviour from a number of fields are reviewed and discussed, including: strategic design of affordances and behaviour-shaping constraints to control or affect energy- or other resource-using interactions; the use of different kinds of feedback and persuasive technology techniques to encourage or guide users to reduce their environmental impact; and context-based systems which use feedback to adjust their behaviour to run at optimum efficiency and reduce the opportunity for user-affected inefficiency. Example implementations in the sustainable engineering and ecodesign field are suggested and discussed.

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2016-08

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-07

Relevance:

30.00%

Publisher:

Abstract:

The business system Pyramid does not currently provide its users with an adequate case-management system for support issues. The system in place requires the customer to contact its provider via telephone to register new cases. In addition, the current system gives users no way to view their existing cases without contacting the provider. A solution to this issue is to migrate the case-management system from telephone contact to a web-based platform, where customers can more easily access their current cases and also create new cases directly through the website. This new system would reduce the time required to manually manage each individual case, for both customer and provider, resulting in an overall reduction in cost for both parties. The result is a system divided into two sections: the first is an API created in Pyramid that acts as a web service, and the second a website to which customers can connect. The website allows users to view their current cases and also to create new cases directly through the site. All the information used by the website is obtained through the web service inside Pyramid. Analyzing the final design of the system, the developers were able to identify both positive and negative aspects of it. Whether the chosen platform was the optimal choice, and what could be included if the system is developed further, are discussed. The development process and the method used during development are also analyzed and discussed, along with the positive and negative aspects that were encountered. In addition, the causes and effects of working with a development team smaller than the suggested size are analyzed. Lastly, actions that could have been taken to prevent certain issues from occurring are examined.
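To make the two-part architecture concrete, below is a minimal sketch of such a case-management web service, assuming a REST-style HTTP API in front of the business system; the endpoint paths, field names, and in-memory store are hypothetical stand-ins for the actual API created in Pyramid.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for case data that would be fetched from Pyramid's internal API.
CASES = [
    {"id": 1, "customer": "acme", "subject": "Login failure", "status": "open"},
]

@app.route("/api/cases", methods=["GET"])
def list_cases():
    # Lets a customer view its current cases without phoning the provider.
    customer = request.args.get("customer")
    return jsonify([c for c in CASES if c["customer"] == customer])

@app.route("/api/cases", methods=["POST"])
def create_case():
    # Lets a customer register a new case directly through the website.
    body = request.get_json()
    case = {"id": len(CASES) + 1, "status": "open", **body}
    CASES.append(case)
    return jsonify(case), 201

if __name__ == "__main__":
    app.run()
```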

Relevance:

30.00%

Publisher:

Abstract:

With the development of electronic devices, more and more mobile clients are connected to the Internet, generating massive data every day. We live in an age of “Big Data”, producing data on the order of hundreds of millions of records daily. By analyzing these data and making predictions, we can devise better development plans. Traditional computation frameworks cannot meet this demand, which is why Hadoop was put forward. This paper first introduces the background and development status of Hadoop, compares MapReduce in Hadoop 1.0 with YARN in Hadoop 2.0, and analyzes their respective advantages and disadvantages. Because resource management is the core role of YARN, the paper then examines the resource allocation module, including resource management, the resource allocation algorithm, the resource preemption model, and the whole resource scheduling process from requesting resources to completing allocation. It also introduces and compares the FIFO Scheduler, Capacity Scheduler, and Fair Scheduler. The main work of this paper is to research and analyze the Dominant Resource Fairness (DRF) algorithm of YARN and to put forward a maximum-resource-utilization algorithm based on it; the paper also suggests improvements to unreasonable aspects of the resource preemption model. Emphasizing “fairness” during resource allocation is the core concept of the DRF algorithm in YARN. Because a cluster serves multiple users and holds multiple resource types, each user's resource request is also multi-dimensional. The DRF algorithm divides a user's resources into a dominant resource and normal resources: for a given user, the dominant resource is the one whose share is highest among all requested resources, and the others are normal resources. DRF requires the dominant resource share of each user to be equal. But in cases where users' dominant resource demands differ greatly, emphasizing “fairness” is not suitable and cannot promote cluster resource utilization. By analyzing such cases, this thesis puts forward a new allocation algorithm based on DRF. The new algorithm still takes fairness into consideration, but its main principle and goal is to maximize resource utilization. Comparing the results of DRF and the new DRF-based algorithm shows that the new algorithm achieves higher resource utilization than DRF. The last part of the thesis installs a YARN environment and uses the Scheduler Load Simulator (SLS) to simulate the cluster environment.
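For reference, the following is a minimal sketch of the standard DRF allocation loop that the thesis builds on: the scheduler repeatedly grants the next task to the user with the lowest dominant share. The capacities and per-task demands are invented for illustration; this is not YARN's actual scheduler code.

```python
def drf_allocate(capacity, demands):
    """Grant tasks one at a time to the user with the lowest dominant share.

    capacity: {resource: total amount}; demands: {user: {resource: per-task amount}}.
    Returns {user: number of tasks allocated}."""
    allocated = {u: {r: 0.0 for r in capacity} for u in demands}
    used = {r: 0.0 for r in capacity}
    tasks = {u: 0 for u in demands}

    def dominant_share(u):
        # A user's dominant share is the largest fraction of any single
        # resource's capacity that the user currently holds.
        return max(allocated[u][r] / capacity[r] for r in capacity)

    while True:
        for u in sorted(demands, key=dominant_share):  # poorest user first
            d = demands[u]
            if all(used[r] + d[r] <= capacity[r] for r in capacity):
                for r in capacity:
                    used[r] += d[r]
                    allocated[u][r] += d[r]
                tasks[u] += 1
                break  # restart: the shares have changed
        else:
            return tasks  # nobody's next task fits; allocation is done

# Classic DRF example: 9 CPUs / 18 GB; A wants <1 CPU, 4 GB>, B wants <3 CPU, 1 GB>.
print(drf_allocate({"cpu": 9.0, "mem": 18.0},
                   {"A": {"cpu": 1.0, "mem": 4.0},
                    "B": {"cpu": 3.0, "mem": 1.0}}))  # -> {'A': 3, 'B': 2}
```

On this example the loop equalises the dominant shares of both users at 2/3 (A's memory share, B's CPU share), which is exactly the fairness property the thesis argues can leave capacity idle when users' dominant demands differ greatly.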

Relevance:

30.00%

Publisher:

Abstract:

With the rapid development of Internet technologies, video and audio processing has become one of the most important areas, driven by the constant demand for high-quality media content. Along with improvements in network environments and hardware, this demand is growing ever more pressing: people expect high-quality video and audio as well as streamed media resources. FFmpeg is a set of open-source libraries for A/V encoding and decoding, and many commercial players use FFmpeg as their playback core. This paper describes the design of a simple, easy-to-use video player based on FFmpeg. The first part covers the basic theory and background of video playback, including concepts such as data formats, streaming media, and video encoding and decoding. In short, the player is built around a video decoding pipeline: obtain video packets from the Internet, read and strip the transport protocols, de-encapsulate the container to obtain encoded stream data, and decode that data into pixel data that can be displayed directly through the graphics card. The encoding and decoding process may introduce varying degrees of data loss (lossy compression), but this usually does not noticeably affect the user experience. The second part covers the FFmpeg decoding process, one of the key points of the paper. In this project, FFmpeg performs the main decoding task: by calling core functions and structures from the FFmpeg libraries, packaged video formats are converted into pixel data, which is then displayed using SDL. The third part covers the SDL display flow; it likewise invokes key display functions from the SDL libraries, although SDL supports far more than display, including many facilities for game development. After these steps, an independent video player is complete, providing all the key functions of a player. The fourth part builds a simple user interface for the player with MFC, making it usable by most people. Finally, given the rise of the mobile Internet and how rarely people put down their phones nowadays, the paper briefly introduces how to port the video player to Android, one of the most widely used mobile platforms.
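As an illustration of the decode pipeline described above (de-encapsulate the container, decode packets, obtain raw pixel data), here is a minimal sketch using the PyAV bindings to FFmpeg's libraries rather than the C API; "input.mp4" is a placeholder file name.

```python
import av  # PyAV: Python bindings to FFmpeg's libav* libraries

container = av.open("input.mp4")        # open and de-encapsulate the container
stream = container.streams.video[0]     # select the first video stream

for packet in container.demux(stream):  # compressed packets from the demuxer
    for frame in packet.decode():       # decoded video frames
        # Convert to a raw RGB pixel buffer, ready for a display layer
        # such as SDL to push to the graphics card.
        rgb = frame.to_ndarray(format="rgb24")
        print(frame.pts, rgb.shape)
```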

Relevance:

30.00%

Publisher:

Abstract:

Recent paradigms in wireless communication architectures describe environments where nodes exhibit highly dynamic behavior (e.g., User Centric Networks). In such environments, routing is still performed according to the regular packet-switched store-and-forward model. Although sufficient to compute at least an adequate path between a source and a destination, such routing behavior cannot adequately sustain the highly nomadic lifestyle that Internet users experience today. This thesis aims to analyse the impact of node mobility on routing scenarios. It also aims to develop forwarding concepts that help in message forwarding across graphs where nodes exhibit human mobility patterns, as is the case in most user-centric wireless networks today. The first part of the work analysed the impact of mobility on routing; we found that node mobility can significantly affect routing performance, depending on link length, distance, and the mobility patterns of nodes. A study of current mobility parameters showed that they capture mobility only partially. The robustness of a routing protocol to node mobility depends on the sensitivity of its routing metric to node mobility. Mobility-aware routing metrics were therefore devised to increase routing robustness to node mobility. The proposed metrics fall into two categories: time-based and spatial correlation-based. For validation of the metrics, several mobility models were used, including ones that mimic human mobility patterns. The metrics were implemented in the Network Simulator tool using two widely deployed multi-hop routing protocols, Optimized Link State Routing (OLSR) and Ad hoc On-Demand Distance Vector (AODV). Using the proposed metrics, we reduced the path re-computation frequency compared to the benchmark metric, meaning that more stable nodes were used to route data. The time-based routing metrics generally performed well across the different node mobility scenarios used. We also noted variation in the performance of the metrics, including the benchmark metric, under different mobility models, owing to differences in the rules governing node mobility in each model.
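As a flavour of what a time-based, mobility-aware metric can look like, the sketch below scores a link by how long it has stayed up, so that path computation (whether in OLSR or AODV) prefers long-lived links over freshly appeared, likely volatile ones. The weighting function and the 30-second constant are illustrative assumptions, not the thesis's actual metrics.

```python
import time

class Link:
    """A wireless link as observed by the routing layer."""
    def __init__(self, up_since=None):
        self.up_since = up_since if up_since is not None else time.time()

    def stability(self, now=None):
        # Links that have survived longer score closer to 1; fresh links
        # start near 0. The 30 s constant is an illustrative choice.
        age = (now if now is not None else time.time()) - self.up_since
        return age / (age + 30.0)

def route_cost(links):
    # Turn stability into an additive cost so that shortest-path routing
    # steers away from volatile links.
    return sum(1.0 / max(link.stability(), 1e-6) for link in links)

now = time.time()
old_path = [Link(now - 300), Link(now - 600)]  # links up for 5 and 10 minutes
new_path = [Link(now - 2), Link(now - 900)]    # one link has just appeared
print(route_cost(old_path) < route_cost(new_path))  # True: prefer old_path
```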

Relevance:

30.00%

Publisher:

Abstract:

In today’s big data world, data is being produced in massive volumes, at great velocity and from a variety of different sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are being increasingly used to derive value out of this big data. A large portion of this data is being stored and processed in the Cloud due to the several advantages provided by the Cloud such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data, and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has been traditionally used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
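As a toy illustration of the progressive-analytics idea behind NOW!, the sketch below evaluates the same aggregate over progressively larger samples, giving early approximate answers that converge to the exact one; the fixed seed keeps the samples repeatable, echoing the deterministic progress semantics described above. The query and data are invented; NOW! itself operates on SQL over big data in the Cloud.

```python
import random

def progressive_mean(data, steps=4, seed=0):
    """Yield (rows seen, running estimate) over growing prefix samples."""
    rng = random.Random(seed)  # fixed seed -> repeatable, deterministic samples
    shuffled = data[:]
    rng.shuffle(shuffled)
    for i in range(1, steps + 1):
        sample = shuffled[: len(data) * i // steps]  # growing prefix sample
        yield len(sample), sum(sample) / len(sample)

rng = random.Random(1)
data = [rng.uniform(0, 100) for _ in range(10_000)]
for n, estimate in progressive_mean(data):
    print(f"after {n:>5} rows: mean ~ {estimate:.2f}")
```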

Relevance:

30.00%

Publisher:

Abstract:

At present, in large precast concrete enterprises, the management of precast concrete components has been chaotic. Most enterprises adopt a labor-intensive manual input method, which is time-consuming, laborious, and error-prone. Somewhat better-run enterprises manage components through bar codes or manually printed serial numbers; however, this is also labor-intensive, and the method is limited by the external environment, which can blur or even erase the serial numbers, causing serious problems for production traceability and quality accountability. Therefore, to achieve rapid development and meet the needs of the times, automating production management has become a major challenge for the modern enterprise. To solve the problems of inefficient production and poor product traceability, this thesis introduces RFID technology into the production of PHC tubular piles. By designing a production management system for precast concrete components, the enterprise can control the entire production process and realize the informatization of production management. RFID technology is already widely used in fields such as entrance control, charge management, and logistics. The system adopts passive RFID tags, which are waterproof, shockproof, and interference-resistant, making them suitable for the actual working environment. Each tag is bound to a precast component steel cage (the structure of the PHC tubular pile before concrete placement), so that each PHC tubular pile has a unique ID number. The precast component then passes through the production procedure: placing the steel cage into the mold, mold clamping, pouring concrete (feeding), stretching, centrifuging, curing, mold removal, and splice welding. At every stage of the procedure, the information of the precast component can be read with an RFID reader. Using a portable smart device connected to the database, the user can conveniently check, query, and manage production information. The system can also trace production parameters and the person in charge, realizing information traceability. This system overcomes the disadvantages common among precast component manufacturers, such as inefficiency, error-proneness, time consumption, labor intensity, and low information relevance. It can improve production management efficiency and produce good economic and social benefits, so it has real practical value.
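The heart of such a system is the log that binds each tag's unique ID to the production stages it passes through. Below is a minimal sketch of that traceability record; the table layout, stage names, and sqlite store are illustrative assumptions, not the thesis's actual schema.

```python
import sqlite3
from datetime import datetime, timezone

db = sqlite3.connect(":memory:")  # stand-in for the production database
db.execute("""CREATE TABLE production_log (
    tag_id TEXT, stage TEXT, operator TEXT, logged_at TEXT)""")

def record_stage(tag_id, stage, operator):
    # Called whenever a reader sees the tag at a production station.
    db.execute("INSERT INTO production_log VALUES (?, ?, ?, ?)",
               (tag_id, stage, operator,
                datetime.now(timezone.utc).isoformat()))

for stage in ["mold loading", "mold clamping", "concrete pouring",
              "stretching", "centrifuging", "curing", "mold removal",
              "splice welding"]:
    record_stage("PILE-000123", stage, "operator-07")

# Trace one pile's full production history from its unique tag ID.
for row in db.execute(
        "SELECT stage, operator, logged_at FROM production_log WHERE tag_id = ?",
        ("PILE-000123",)):
    print(row)
```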

Relevance:

30.00%

Publisher:

Abstract:

The paper addresses the role played by research-based spin-offs (RBSOs) as knowledge dissemination mechanisms, through their position in knowledge networks. For this purpose the paper analyses the formal networks established by Portuguese RBSOs in the context of publicly funded research, technology, and pre-commercial product development projects, and investigates their configuration at two levels. At the organisational level, the aim is to understand whether RBSOs extend their reach beyond the academic sphere; if they do, whether they relate with similar firms or connect to organisations located downstream in the knowledge value chain; and what their position is in networks involving both research organisations and other firms. At the spatial level, the aim is to understand whether RBSOs extend their reach beyond the region where they were created, thus potentially acting as connectors between diverse regions. The analysis starts from the population of RBSOs created in Portugal up to 2007 (387) and identifies those that established formal technological relationships as part of projects funded by all the programmes launched in the period 1993-2012. As a result, the analysis encompasses 192 collaborative projects and involves 82 spin-offs and 281 partners, of which only 20% are research organisations, the remainder being other firms and a variety of other user organisations. The results, although still preliminary, provide some insights into the knowledge networking behaviour of RBSOs. As expected, research organisations are central actors in spin-offs’ networks, being the sole partner for some of them. But half of the RBSOs have moved beyond the academic sphere, frequently acting as a central element in tripartite technological relationships between research and other organisations and occupying an intermediation position in the network, thus potentially facilitating knowledge circulation and transformation. Also as expected, RBSOs are predominantly located in the main metropolitan areas and tend to relate with similarly located organisations. But while geographical proximity emerges as important in the choice of partners, in about half of the cases RBSO knowledge networks extend beyond regional boundaries. Given their central position in the network, this suggests a role as connectors across regions that will be explored in subsequent research.
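One standard way to quantify the "intermediation position" discussed above is betweenness centrality in the project collaboration network, as in the sketch below; the miniature graph is invented rather than drawn from the Portuguese RBSO data, and the networkx library is assumed to be available.

```python
import networkx as nx

G = nx.Graph()
# Edges link organisations that shared a publicly funded project.
G.add_edges_from([
    ("university", "spinoff"),
    ("spinoff", "manufacturer"),  # the spin-off bridges research and industry
    ("spinoff", "user_org"),
    ("manufacturer", "user_org"),
])

# High betweenness = the node lies on many shortest paths between others,
# i.e. it occupies an intermediation position in the network.
for node, score in sorted(nx.betweenness_centrality(G).items()):
    print(f"{node:>12}: {score:.2f}")
```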

Relevance:

30.00%

Publisher:

Abstract:

This thesis introduces the L1 Adaptive Control Toolbox, a set of tools implemented in Matlab that aid in the design of an L1 adaptive controller and enable the user to construct simulations of the closed-loop system to verify its performance. Following a brief review of the existing theory on L1 adaptive controllers, the interface of the toolbox is presented, including a description of the functions accessible to the user. Two novel algorithms for determining the required sampling period of a piecewise-constant adaptive law are presented and their implementation in the toolbox is discussed. A detailed description of the structure of the toolbox is provided, along with a discussion of how simulations are created. Finally, the graphical user interface is presented and described in detail, including the graphical design tools provided for the development of the filter C(s). The thesis closes with suggestions for further improvement of the toolbox.
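To give a feel for why the sampling period of a piecewise-constant adaptive law matters, here is a toy simulation in Python (emphatically not the toolbox's algorithms, nor the actual L1 adaptive law): a constant disturbance is estimated from a state predictor's error, but the estimate is refreshed only every Ts seconds, so a coarser Ts lets the state drift further between corrections.

```python
# Toy demo of a piecewise-constant update: a state predictor runs alongside
# the plant, and the disturbance estimate sigma_hat is corrected from the
# prediction error only at multiples of Ts. All dynamics, gains, and numbers
# are invented for illustration.
def simulate(Ts, t_end=2.0, dt=1e-3):
    a_m, b, sigma = -2.0, 1.0, 0.5  # stable dynamics, true constant disturbance
    x = x_hat = sigma_hat = 0.0
    next_update = 0.0
    t, peak = 0.0, 0.0
    while t < t_end:
        if t >= next_update:        # piecewise-constant law: update instants
            sigma_hat += x - x_hat  # crude correction from prediction error
            next_update += Ts
        u = -sigma_hat              # try to cancel the estimated disturbance
        x += dt * (a_m * x + b * (u + sigma))              # true plant
        x_hat += dt * (a_m * x_hat + b * (u + sigma_hat))  # state predictor
        peak = max(peak, abs(x))
        t += dt
    return peak

for Ts in (0.001, 0.01, 0.1):
    print(f"Ts={Ts:<6} peak |x| = {simulate(Ts):.4f}")
```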

Relevance:

30.00%

Publisher:

Abstract:

There may be advantages to be gained by combining Case-Based Reasoning (CBR) techniques with numerical models. In this paper we consider how CBR can be used as a flexible query engine to improve the usability of numerical models. In particular, CBR can help to solve inverse and mixed problems, as well as constraint problems. We discuss this idea with reference to the illustrative example of a pneumatic conveyor. We describe a model of the problem of particle degradation in such a conveyor, and the problems faced by design engineers. The solution of these problems requires a system that allows iterative sharing of control between user, CBR system, and numerical model. This multi-initiative interaction is illustrated for the pneumatic conveyor by means of Unified Modeling Language (UML) collaboration and sequence diagrams. We show approaches to the solution of these problems via a CBR tool.
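A minimal sketch of the idea of CBR as a query engine over a numerical model is shown below: cases store (input, output) pairs from past model runs, and an inverse query retrieves the stored input whose output best matches a target. The degradation model here is a made-up stand-in for the conveyor model, not the one in the paper.

```python
def model(air_velocity):
    # Hypothetical forward numerical model: degradation vs. air velocity.
    return 0.002 * air_velocity ** 2

# Past model runs retained as cases: (input, output) pairs.
case_base = [(v, model(v)) for v in range(5, 45, 5)]

def inverse_query(target_degradation):
    # The forward model answers "input -> output"; CBR retrieval answers the
    # inverse "output -> input" by nearest-neighbour search over the cases.
    return min(case_base, key=lambda case: abs(case[1] - target_degradation))

velocity, degradation = inverse_query(0.8)
print(f"closest case: velocity={velocity} m/s, degradation={degradation:.3f}")
```

In a multi-initiative setting, the engineer could then refine the retrieved case, re-run the numerical model around it, and feed the new result back into the case base.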

Relevance:

30.00%

Publisher:

Abstract:

The growing availability and popularity of opinion-rich resources on the web, such as review sites and personal blogs, has made it convenient to find out about the opinions and experiences of ordinary people. At the same time, however, this eruption of data has made it difficult to reach a conclusion. In this thesis, I develop a novel recommendation system, Recomendr, that helps users digest all the reviews about an entity and compare candidate entities along ad-hoc dimensions specified by keywords. It takes keyword-specified ad-hoc dimensions/features as input from the user and, based on those features, compares the selected range of entities using reviews from related user-generated content (UGC), e.g. online reviews. It then rates the textual stream of data using a scoring function and returns a decision based on the aggregate opinion to the user. Evaluation of Recomendr using a data set in the laptop domain shows that it can effectively recommend the best laptop according to user-specified dimensions such as price. Recomendr is a general system that can potentially work for any entities on which online reviews or other opinionated text are available.
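The sketch below illustrates keyword-dimension scoring in this spirit: review sentences mentioning a user-supplied dimension (e.g. "price") are scored against a tiny sentiment lexicon and aggregated per entity. The lexicon, reviews, and scoring function are illustrative assumptions; the thesis's actual scoring function is not reproduced here.

```python
POSITIVE = {"great", "good", "excellent", "cheap", "fast"}
NEGATIVE = {"bad", "poor", "expensive", "slow"}

def score(reviews, dimension):
    # Keep only sentences that mention the user-specified dimension, then
    # aggregate a crude lexicon-based sentiment over them.
    sentences = [s for review in reviews
                 for s in review.lower().split(".") if dimension in s]
    values = [sum(word in POSITIVE for word in s.split()) -
              sum(word in NEGATIVE for word in s.split())
              for s in sentences]
    return sum(values) / len(values) if values else 0.0

laptops = {
    "laptop_a": ["Great screen. The price is cheap for what you get."],
    "laptop_b": ["Fast CPU. Sadly the price is expensive."],
}
best = max(laptops, key=lambda name: score(laptops[name], "price"))
print(best)  # -> laptop_a
```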

Relevance:

30.00%

Publisher:

Abstract:

Droplet microfluidics is an active multidisciplinary area of research that evolved out of the larger field of microfluidics. It enables the user to handle, process and manipulate micrometer-sized emulsion droplets on a microfabricated platform. The capability to carry out a large number of individual experiments per unit time makes the droplet microfluidic technology an ideal high-throughput platform for analysis of biological and biochemical samples. The objective of this thesis was to use such a technology for designing systems with novel implications in the newly emerging field of synthetic biology. Chapter 4, the first results chapter, introduces a novel method of droplet coalescence using a flow-focusing capillary device. In Chapter 5, the development of a microfluidic platform for the fabrication of a cell-free micro-environment for site-specific gene manipulation and protein expression is described. Furthermore, a novel fluorescent reporter system which functions both in vivo and in vitro is introduced in this chapter. Chapter 6 covers the microfluidic fabrication of polymeric vesicles from poly(2-methyloxazoline-b-dimethylsiloxane-b-2-methyloxazoline) tri-block copolymer. The polymersome made from this polymer was used in the next chapter for the study of a chimeric membrane protein called mRFP1-EstA∗. In Chapter 7, the application of microfluidics for the fabrication of synthetic biological membranes to recreate artificial cell-like chassis structures for reconstitution of a membrane-anchored protein is described.