550 results for execute


Relevance:

10.00%

Publisher:

Abstract:

For people with cognitive disabilities, technology is more often thought of as a support mechanism rather than as a source of division that may require intervention to equalize access across the cognitive spectrum. This paper presents a first attempt at formalizing the digital gap created by the generalization of search engines. This was achieved through the development of a mapping of the cognitive abilities required by users to execute low-level tasks during a standard Web search. The mapping demonstrates how critical these abilities are to successfully using search engines with an adequate level of independence. It will lead to a set of design guidelines for search engine interfaces that allow for the engagement of users of all abilities, and also, more importantly, for search algorithms such as query suggestion and relevance measures (i.e. ranking).

Relevance:

10.00%

Publisher:

Abstract:

Clustering is an important technique in organising and categorising web-scale document collections. The main challenges faced in clustering the billions of documents available on the web are the processing power required and the sheer size of the datasets involved. More importantly, it is nigh impossible to generate labels for a general web document collection containing billions of documents and a vast taxonomy of topics. However, document clusters are most commonly evaluated by comparison to a ground-truth set of document labels. This paper presents a clustering and labeling solution in which Wikipedia is clustered and hundreds of millions of web documents in ClueWeb12 are mapped onto those clusters. This solution is based on the assumption that Wikipedia covers such a wide range of diverse topics that it represents a small-scale web. We found that it was possible to perform the web-scale document clustering and labeling process on one desktop computer in under a couple of days for the Wikipedia clustering solution containing about 1,000 clusters. It takes longer to execute a solution with finer-granularity clusterings of 10,000 or 50,000 clusters. These results were evaluated using a set of external data.
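
A minimal sketch of the mapping step described in this abstract, assuming a TF-IDF representation and a k-means clustering of the Wikipedia side; the toy corpora, cluster count, and variable names are illustrative only, not the paper's actual pipeline:

```python
# Illustrative sketch: cluster a (tiny) Wikipedia-style corpus once, then map
# web documents onto the nearest cluster. All texts and sizes are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

wikipedia_articles = [
    "association football is a team sport played with a spherical ball",
    "the stock market is a collection of markets where equities are traded",
]
web_documents = [
    "latest football transfer news and match reports",
    "daily analysis of equity markets and share prices",
]

vectorizer = TfidfVectorizer(max_features=50000, stop_words="english")
wiki_vectors = vectorizer.fit_transform(wikipedia_articles)

# Cluster the Wikipedia side once (the paper uses on the order of 1,000 clusters;
# 2 is used here only because the toy corpus has 2 articles).
kmeans = KMeans(n_clusters=2, random_state=0).fit(wiki_vectors)

# Map each web document to its nearest Wikipedia-derived cluster; the cluster's
# dominant terms can then serve as its label.
web_vectors = vectorizer.transform(web_documents)
print(kmeans.predict(web_vectors))
```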

Relevance:

10.00%

Publisher:

Abstract:

This paper proposes a recommendation system that supports process participants in making risk-informed decisions, with the goal of reducing the risks that may arise during process execution. Risk reduction involves decreasing the likelihood of a process fault occurring as well as its severity. Given a business process exposed to risks, e.g. a financial process exposed to a risk of reputation loss, we enact this process, and whenever a process participant needs to provide input to the process, e.g. by selecting the next task to execute or by filling out a form, we suggest to the participant the action that minimizes the predicted process risk. Risks are predicted by traversing decision trees generated from the logs of past process executions, which consider process data, involved resources, task durations, and other information elements such as task frequencies. When applied in the context of multiple process instances running concurrently, a second technique is employed that uses integer linear programming to compute the optimal assignment of resources to the tasks to be performed, in order to deal with the interplay between the risks of different instances. The recommendation system has been implemented as a set of components on top of the YAWL BPM system, and its effectiveness has been evaluated using a real-life scenario in collaboration with risk analysts of a large insurance company. The results, based on a simulation of the real-life scenario and its comparison with the event data provided by the company, show that process instances executed concurrently complete with significantly fewer faults and with lower fault severities when the recommendations provided by our recommendation system are taken into account.
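
To illustrate the cross-instance assignment step, the sketch below solves a toy resource-to-task assignment that minimizes total predicted risk. The risk matrix and the use of SciPy's Hungarian-algorithm solver are assumptions made for demonstration; this is not the paper's actual ILP formulation over YAWL work items:

```python
# Toy sketch: assign resources to pending tasks so that the sum of predicted
# risks is minimized. The risk values would come from the decision trees
# described above; here they are made up for illustration.
import numpy as np
from scipy.optimize import linear_sum_assignment

# predicted_risk[r][t] = predicted risk of letting resource r perform task t
predicted_risk = np.array([
    [0.10, 0.40, 0.35],   # resource 0
    [0.25, 0.15, 0.50],   # resource 1
    [0.30, 0.45, 0.05],   # resource 2
])

rows, cols = linear_sum_assignment(predicted_risk)
for r, t in zip(rows, cols):
    print(f"resource {r} -> task {t} (predicted risk {predicted_risk[r, t]:.2f})")
print("total predicted risk:", predicted_risk[rows, cols].sum())
```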

Relevance:

10.00%

Publisher:

Abstract:

We present our work on tele-operating a complex humanoid robot with the help of bio-signals collected from the operator. The frameworks for robot vision, collision avoidance and machine learning developed in our lab, when combined, allow for safe interaction with the environment. This works even with noisy control signals, such as the operator's hand acceleration and electromyography (EMG) signals. These bio-signals are used to execute equivalent actions (such as reaching for and grasping objects) on the 7-DOF arm.
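
A hedged, minimal sketch of how a noisy EMG signal might be turned into a discrete command; the windowing, threshold, and command names are assumptions for illustration, not the authors' actual mapping:

```python
# Illustrative sketch only: turn a windowed EMG signal into a discrete
# grasp/release command by thresholding its RMS amplitude. The threshold,
# window size, and command names are made up for demonstration.
import numpy as np

def emg_to_command(emg_window: np.ndarray, threshold: float = 0.3) -> str:
    """Return 'grasp' if muscle activation in the window exceeds the threshold."""
    rms = np.sqrt(np.mean(np.square(emg_window)))
    return "grasp" if rms > threshold else "release"

# Example: a noisy burst of activation triggers the grasp command.
rng = np.random.default_rng(0)
active_window = 0.5 * np.sin(np.linspace(0, 20, 200)) + 0.1 * rng.standard_normal(200)
print(emg_to_command(active_window))  # -> 'grasp'
```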

Relevance:

10.00%

Publisher:

Abstract:

With organisations facing significant challenges to remain competitive, Business Process Improvement (BPI) initiatives are often conducted to improve the efficiency and effectiveness of their business processes, focussing on time, cost, and quality improvements. Event logs, which contain a detailed record of business operations over a certain time period as recorded by an organisation's information systems, are the first step towards initiating evidence-based BPI activities. Given an (original) event log as a starting point, an approach was developed to explore better ways to execute a business process, resulting in an improved (perturbed) event log. Identifying the differences between the original event log and the perturbed event log can provide valuable insights that help organisations improve their processes. However, there is a lack of automated techniques for detecting the differences between two event logs. This research therefore aims to develop visualisation techniques that provide targeted analysis of resource reallocation and activity rescheduling. The differences between the two event logs are first identified; the changes are then conceptualised and realised with a number of visualisations. With the proposed visualisations, analysts are able to identify the changes related to resources and time, resulting in a more efficient business process. Ultimately, analysts can use this comparative information to initiate evidence-based BPI activities.
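
A small sketch of the kind of event-log comparison described above, assuming a simple tabular log with activity, resource, and duration columns; the schema and toy records are illustrative only, not the actual logs used in the research:

```python
# Illustrative sketch: compare resource allocation and activity timing between
# an original and a perturbed event log using pandas.
import pandas as pd

original = pd.DataFrame({
    "activity": ["Check claim", "Check claim", "Approve"],
    "resource": ["Alice", "Alice", "Bob"],
    "duration_min": [30, 42, 15],
})
perturbed = pd.DataFrame({
    "activity": ["Check claim", "Check claim", "Approve"],
    "resource": ["Alice", "Carol", "Bob"],
    "duration_min": [28, 20, 15],
})

def summarise(log: pd.DataFrame) -> pd.DataFrame:
    # Per-activity workload per resource plus mean duration.
    return log.groupby(["activity", "resource"]).agg(
        cases=("duration_min", "size"),
        mean_duration=("duration_min", "mean"),
    )

# A side-by-side comparison highlights reallocations (e.g. work moving to Carol)
# and rescheduling (changed mean durations).
diff = summarise(original).join(summarise(perturbed), how="outer",
                                lsuffix="_original", rsuffix="_perturbed")
print(diff)
```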

Relevance:

10.00%

Publisher:

Abstract:

Light emitting field effect transistors (LEFETs) are emerging as a multi-functional class of optoelectronic devices. LEFETs can simultaneously execute light emission and the standard logic functions of a transistor in a single architecture. However, current LEFET architectures deliver either high brightness or high efficiency, but not both concurrently, thus limiting their use in technological applications. Here we show an LEFET device strategy that simultaneously improves brightness and efficiency. The key step change in LEFET performance arises from the bottom-gate, top-contact device architecture, in which the source/drain electrodes are semitransparent and the active channel contains a bilayer comprising a high-mobility charge-transporting polymer and a yellow-green emissive polymer. A record external quantum efficiency (EQE) of 2.1% at 1000 cd/m² is demonstrated for polymer-based bilayer LEFETs.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, the design and implementation of a single shared bus, shared memory multiprocessing system built from Intel single-board computers are presented. The hardware configuration and the operating system developed to execute the parallel algorithms are discussed. The performance evaluation studies carried out on Image are outlined.

Relevance:

10.00%

Publisher:

Abstract:

Aptitude-based student selection: A study concerning the admission processes of some technically oriented healthcare degree programmes in Finland (Orthotics and Prosthetics, Dental Technology and Optometry). The data studied consisted of convenience samples of preadmission information and the results of the admission processes of three technically oriented healthcare degree programmes (Orthotics and Prosthetics, Dental Technology and Optometry) in Finland during the years 1977-1986 and 2003. The number of subjects tested and interviewed in the first samples was 191, 615 and 606, and in the second 67, 64 and 89, respectively. The questions of the six studies were: I. How were different kinds of preadmission data related to each other? II. Which were the major determinants of the admission decisions? III. Did the graduated students and those who dropped out differ from each other? IV. Was it possible to predict how well students would perform in the programmes? V. How was the student selection executed in the year 2003? VI. Should clinical vs. statistical prediction, or both, be used? (Some remarks are presented on Meehl's argument: "Always, we might as well face it, the shadow of the statistician hovers in the background; always the actuary will have the final word.")

The main results of the study were as follows. Ability tests, dexterity tests and judgements of personality traits (communication skills, initiative, stress tolerance and motivation) provided unique, non-redundant information about the applicants. Available demographic variables did not bias the judgements of personality traits. In all three programme settings, four-factor solutions (personality, reasoning, gender-technical and age-vocational, with factor scores) could be extracted by the Maximum Likelihood method with graphical Varimax rotation. The personality factor dominated the final aptitude judgements and very strongly affected the selection decisions. There were no clear differences between graduated students and those who had dropped out with regard to the four factors. In addition, the factor scores did not predict how well the students performed in the programmes. Meehl's argument on the uncertainty of clinical prediction was supported by the results, which on the other hand did not provide any relevant data for rules on statistical prediction.

No clear arguments for or against aptitude-based student selection were presented. However, the structure of the aptitude measures and their impact on the admission process are now better known. The concept of "personal aptitude" is not necessarily included in the values and preferences of those in charge of organizing the schooling. Thus, the most well-founded and cost-effective way to execute student selection is obviously to rely on, e.g., the grade point averages of the matriculation examination and/or written entrance exams. This procedure, according to the present study, would result in a student group with a quite different makeup (60%) from the group selected on the basis of aptitude tests. For the recruiting organizations, instead, "personal aptitude" may be a matter of great importance. The employers, of course, decide on personnel selection. The psychologists, if consulted, are responsible for the proper use of psychological measures.
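
A brief, illustrative sketch of the four-factor extraction mentioned above, using synthetic data and scikit-learn's varimax-rotated factor analysis; the data, variable count, and estimator choice are assumptions for demonstration, not the study's original analysis:

```python
# Illustrative sketch: extract four varimax-rotated factors from applicant
# test data. Random numbers stand in for the real preadmission measures.
# (Varimax rotation requires a reasonably recent scikit-learn release.)
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 12))    # 200 applicants, 12 test/rating variables

fa = FactorAnalysis(n_components=4, rotation="varimax")
factor_scores = fa.fit_transform(X)   # per-applicant scores on the 4 factors
loadings = fa.components_.T           # variable-by-factor loading matrix
print(loadings.shape)                 # -> (12, 4)
```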

Relevance:

10.00%

Publisher:

Abstract:

Parkinson’s disease (PD) is the second most common neurodegenerative disease among the elderly. Its etiology is unknown and no disease-modifying drugs are available. Thus, more information concerning its pathogenesis is needed. Among other genes, mutated PTEN-induced kinase 1 (PINK1) has been linked to early-onset and sporadic PD, but its mode of action is poorly understood. Most animal models of PD are based on the use of the neurotoxin 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP). MPTP is metabolized to MPP+ by monoamine oxidase B (MAO B) and causes cell death of dopaminergic neurons in the substantia nigra in mammals. Zebrafish has been a widely used model organism in developmental biology, but is now emerging as a model for human diseases due to its ideal combination of properties. Zebrafish are inexpensive and easy to maintain, develop rapidly, breed in large quantities producing transparent embryos, and are readily manipulated by various methods, particularly genetic ones. In addition, zebrafish are vertebrate animals, and results derived from zebrafish may be more applicable to mammals than results from invertebrate genetic models such as Drosophila melanogaster and Caenorhabditis elegans. However, this similarity cannot be taken for granted. The aim of this study was to establish and test a PD model using larval zebrafish. The developing monoaminergic neuronal systems of larval zebrafish were investigated. We identified and classified 17 catecholaminergic and 9 serotonergic neuron populations in the zebrafish brain. A 3-dimensional atlas was created to facilitate future research. Only one gene encoding MAO was found in the zebrafish genome. Zebrafish MAO showed MAO A-type substrate specificity, but non-A-non-B inhibitor specificity. The distribution of MAO in larval and adult zebrafish brains was both diffuse and distinctly cellular. Inhibition of MAO during larval development led to markedly elevated 5-hydroxytryptamine (serotonin, 5-HT) levels, which decreased the locomotion of the fish. MPTP exposure caused a transient loss of cells in specific aminergic cell populations and decreased locomotion. MPTP-induced changes could be rescued by the MAO B inhibitor deprenyl, suggesting a role for MAO in MPTP toxicity. MPP+ affected only one catecholaminergic cell population; thus, the action of MPP+ was more selective than that of MPTP. The zebrafish PINK1 gene was cloned, and morpholino oligonucleotides were used to suppress its expression in larval zebrafish. The functional domains and expression pattern of zebrafish PINK1 resembled those of other vertebrates, suggesting that zebrafish is a feasible model for studying PINK1. Translation inhibition resulted in cell loss in the same catecholaminergic cell populations as MPTP and MPP+. Inactivation of PINK1 sensitized larval zebrafish to subefficacious doses of MPTP, causing a decrease in locomotion and cell loss in one dopaminergic cell population. Zebrafish appears to be a feasible model for studying PD, since its aminergic systems, the mode of action of MPTP, and the functions of PINK1 resemble those of mammals. However, the functions of zebrafish MAO differ from the two forms of MAO found in mammals. Future studies using zebrafish PD models should utilize the advantages specific to zebrafish, such as the ability to execute large-scale genetic or drug screens.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, three parallel polygon scan conversion algorithms have been proposed, and their performance when executed on a shared bus architecture has been compared. It has been shown that the parallel algorithm that does not use edge coherence performs better than those that use edge coherence. Further, a multiprocessing architecture has been proposed to execute the parallel polygon scan conversion algorithms more efficiently than a single shared bus architecture.
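
For readers unfamiliar with the technique, the sketch below shows a brute-force scan-conversion step that recomputes edge intersections for every scanline (i.e. without edge coherence); it is a toy serial stand-in, not one of the paper's parallel algorithms:

```python
# Illustrative sketch of brute-force polygon scan conversion (no edge coherence):
# for each scanline, intersect every polygon edge, sort the x-intersections,
# and fill between pairs.
def scan_convert(polygon):
    """polygon: list of (x, y) vertices; yields (y, x_start, x_end) spans."""
    ys = [y for _, y in polygon]
    for y in range(min(ys), max(ys)):
        xs = []
        scan_y = y + 0.5                                   # sample at pixel centres
        for (x0, y0), (x1, y1) in zip(polygon, polygon[1:] + polygon[:1]):
            if (y0 <= scan_y) != (y1 <= scan_y):           # edge crosses the scanline
                xs.append(x0 + (scan_y - y0) * (x1 - x0) / (y1 - y0))
        xs.sort()
        for x_start, x_end in zip(xs[::2], xs[1::2]):
            yield y, x_start, x_end

# Example: filled spans of a small triangle.
for span in scan_convert([(0, 0), (10, 0), (5, 8)]):
    print(span)
```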

Relevance:

10.00%

Publisher:

Abstract:

The StreamIt programming model has been proposed to exploit parallelism in streaming applications on general purpose multicore architectures. StreamIt graphs describe task, data and pipeline parallelism which can be exploited on accelerators such as Graphics Processing Units (GPUs) or the CellBE which support abundant parallelism in hardware. In this paper, we describe a novel method to orchestrate the execution of a StreamIt program on a multicore platform equipped with an accelerator. The proposed approach identifies, using profiling, the relative benefits of executing a task on the superscalar CPU cores and the accelerator. We formulate the problem of partitioning the work between the CPU cores and the GPU, taking into account the latencies for data transfers and the required buffer layout transformations associated with the partitioning, as an integrated Integer Linear Program (ILP) which can then be solved by an ILP solver. We also propose an efficient heuristic algorithm for the work partitioning between the CPU and the GPU, which provides solutions that are within 9.05% of the optimal solution on average across the benchmark suite. The partitioned tasks are then software pipelined to execute on the multiple CPU cores and the Streaming Multiprocessors (SMs) of the GPU. The software pipelining algorithm orchestrates the execution between the CPU cores and the GPU by emitting the code for the CPU and the GPU, and the code for the required data transfers. Our experiments on a platform with 8 CPU cores and a GeForce 8800 GTS 512 GPU show a geometric mean speedup of 6.94X, with a maximum of 51.96X, over single-threaded CPU execution across the StreamIt benchmarks. This is an 18.9% improvement over a partitioning strategy that maps only the filters that cannot be executed on the GPU - the filters with state that is persistent across firings - onto the CPU.
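
A toy version of the CPU/GPU partitioning ILP in the spirit of the formulation described above; the filter costs, communication penalty, and simplified objective are assumptions for illustration, and the paper's actual ILP additionally models buffer layout transformations:

```python
# Toy partitioning ILP: place each filter on the CPU or the GPU so that the
# busier device's load (plus transfer costs on cut edges) is minimized.
import pulp

filters = ["source", "fir", "decimate", "sink"]
edges = [("source", "fir"), ("fir", "decimate"), ("decimate", "sink")]
cpu_cost = {"source": 4, "fir": 20, "decimate": 6, "sink": 3}  # profiled times (made up)
gpu_cost = {"source": 9, "fir": 2, "decimate": 2, "sink": 8}
transfer_cost = 5                                              # per cut edge (made up)

prob = pulp.LpProblem("partition", pulp.LpMinimize)
on_gpu = pulp.LpVariable.dicts("gpu", filters, cat="Binary")
cut = {e: pulp.LpVariable(f"cut_{i}", cat="Binary") for i, e in enumerate(edges)}
makespan = pulp.LpVariable("makespan", lowBound=0)

prob += makespan  # objective: minimize the slower device's load plus transfers
gpu_load = pulp.lpSum(gpu_cost[f] * on_gpu[f] for f in filters)
cpu_load = pulp.lpSum(cpu_cost[f] * (1 - on_gpu[f]) for f in filters)
comm = pulp.lpSum(transfer_cost * cut[e] for e in edges)
prob += gpu_load + comm <= makespan
prob += cpu_load + comm <= makespan

# An edge is cut when its endpoints are mapped to different devices.
for (u, v) in edges:
    prob += cut[(u, v)] >= on_gpu[u] - on_gpu[v]
    prob += cut[(u, v)] >= on_gpu[v] - on_gpu[u]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({f: ("GPU" if on_gpu[f].value() == 1 else "CPU") for f in filters})
```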

Relevance:

10.00%

Publisher:

Abstract:

Data flow computers are high-speed machines in which an instruction is executed as soon as all its operands are available. This paper describes the EXtended MANchester (EXMAN) data flow computer, which incorporates three major extensions to the basic Manchester machine. As extensions we provide a multiple matching units scheme, an efficient implementation of the array data structure, and a facility to concurrently execute reentrant routines. A simulator for the EXMAN computer has been coded in the discrete event simulation language SIMULA 67 on the DEC 1090 system. Performance analysis studies have been conducted on the simulated EXMAN computer to study the effectiveness of the proposed extensions. The performance experiments have been carried out using three sample problems: matrix multiplication, Bresenham's line drawing algorithm, and the polygon scan-conversion algorithm.
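
An illustrative sketch of the dataflow firing rule mentioned in the first sentence: an instruction executes as soon as all of its operands are available. The tiny graph and names are made up and do not model the EXMAN architecture itself:

```python
# Minimal dataflow interpreter: each node fires once, as soon as tokens are
# present on all of its inputs. The example graph computes (a + b) * (a - b).
graph = {
    "add": {"inputs": ["a", "b"], "op": lambda x, y: x + y, "output": "sum"},
    "sub": {"inputs": ["a", "b"], "op": lambda x, y: x - y, "output": "diff"},
    "mul": {"inputs": ["sum", "diff"], "op": lambda x, y: x * y, "output": "result"},
}

def run(initial_tokens):
    tokens = dict(initial_tokens)
    fired = set()
    progress = True
    while progress:
        progress = False
        for name, node in graph.items():
            if name not in fired and all(i in tokens for i in node["inputs"]):
                tokens[node["output"]] = node["op"](*(tokens[i] for i in node["inputs"]))
                fired.add(name)
                progress = True   # firing may enable further instructions
    return tokens

print(run({"a": 7, "b": 3})["result"])   # (7 + 3) * (7 - 3) = 40
```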

Relevance:

10.00%

Publisher:

Abstract:

The objective of this thesis is to evaluate different means of increasing the natural reproduction of migratory fish, especially salmon, in the river Kymijoki. The original stocks of migratory fish in Kymijoki were lost by the 1950s because of hydropower plants and the worsened quality of water in the river. Nowadays the salmon stock is based on hatchery-reared fish, even though there is significant potential for natural smolt production in the river. The main problem for natural reproduction is that migratory fish cannot ascend to the reproduction areas above the Korkeakoski and Koivukoski hydropower plants. This thesis evaluates alternative projects that aim to open these ascension routes, together with their costs and benefits.

The method used in the evaluation is social cost-benefit analysis. The alternative projects evaluated in this thesis consist of projects that aim to change the flow patterns between the eastern branches of Kymijoki and projects that involve building a fish ladder; different combinations of these projects are also considered. The objective is to find the project that is the most profitable to execute, which can be determined by comparing the net present values of the projects. In addition, a sensitivity analysis is made on the parameter values that are most uncertain. We compare the net present values of the projects with the net present values of hatchery-reared smolt releases, so that we can evaluate whether the projects or the smolt releases are more socially profitable in the long term.

The results of this thesis indicate that the projects involving a fish ladder next to the Korkeakoski hydropower plant are the most socially profitable. If this fish ladder were built, the natural reproduction of salmon in the Kymijoki river could become so extensive that hatchery-reared smolt releases could even be stopped. The results of the sensitivity analysis indicate that the net present values of the projects depend especially on the initial smolt survival rate of wild salmon and the functioning of the potential fish ladder in Korkeakoski. Changes in other parameter values also influence the results of the cost-benefit analysis, but not as significantly. When the net present values of the projects and the smolt releases are compared, the results depend on which period of time is selected for calculating the average catches of reared salmon. If the average of the last 5 years' catches is used in calculating the net benefits of smolt releases, all the alternative projects are more profitable than the releases. When the average of the last 10 years is used, only the building of the fish ladder in Korkeakoski and all the project combinations are more profitable than the smolt releases.
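
A minimal sketch of the net-present-value comparison described above; the cash flows and discount rate are invented for illustration and are not the thesis's actual estimates:

```python
# Illustrative sketch: compute net present values for alternative projects and
# rank them. All figures below are placeholders, not results from the thesis.
def npv(cash_flows, rate):
    """cash_flows[t] is the net benefit (benefits minus costs) in year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

projects = {
    # year 0 is the investment, later years are annual net benefits (made up)
    "fish ladder at Korkeakoski": [-2_000_000] + [250_000] * 30,
    "flow change only": [-300_000] + [40_000] * 30,
    "smolt releases (status quo)": [0] + [60_000] * 30,
}

discount_rate = 0.03
for name, flows in sorted(projects.items(), key=lambda kv: -npv(kv[1], discount_rate)):
    print(f"{name}: NPV = {npv(flows, discount_rate):,.0f}")
```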

Relevance:

10.00%

Publisher:

Abstract:

There exist various proposals for building a functional, fault-tolerant, large-scale quantum computer. Topological quantum computation is a more exotic proposal, which makes use of the properties of quasiparticles that manifest only in certain two-dimensional systems. These so-called anyons exhibit topological degrees of freedom which, in principle, can be used to execute quantum computation with intrinsic fault tolerance. This feature is the main incentive to study topological quantum computation. The objective of this thesis is to provide an accessible introduction to the theory.

This thesis considers the theory of anyons arising in two-dimensional quantum mechanical systems, which are described by gauge theories based on so-called quantum double symmetries. The quasiparticles are shown to exhibit interactions and carry quantum numbers which are both of a topological nature. In particular, it is found that the addition of the quantum numbers is not unique, but that the fusion of the quasiparticles is described by a non-trivial fusion algebra. It is discussed how this property can be used to encode quantum information in a manner which is intrinsically protected from decoherence, and how one could, in principle, perform quantum computation by braiding the quasiparticles.

As an example of the presented general discussion, the particle spectrum and the fusion algebra of an anyon model based on the gauge group S_3 are explicitly derived. The fusion algebra is found to branch into multiple proper subalgebras, and the simplest of them is chosen as a model for an illustrative demonstration. The different steps of a topological quantum computation are outlined and the computational power of the model is assessed. It turns out that the chosen model is not universal for quantum computation. However, because the objective was a demonstration of the theory with explicit calculations, none of the other more complicated fusion subalgebras were considered. Studying their applicability for quantum computation could be a topic of further research.
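
An illustrative sketch of composing a non-trivial fusion algebra. For brevity it uses the well-known Fibonacci anyon rules (τ × τ = 1 + τ) rather than the quantum double of S_3 studied in the thesis; the idea of tracking fusion multiplicities is the same:

```python
# Illustrative sketch: non-unique "addition" of topological quantum numbers,
# expressed as a fusion table, and the fusion of several anyons in sequence.
from collections import Counter

# fusion[(a, b)] lists the possible fusion outcomes of anyons a and b
# (Fibonacci rules, used here only as a simple stand-in).
fusion = {
    ("1", "1"): ["1"],
    ("1", "tau"): ["tau"],
    ("tau", "1"): ["tau"],
    ("tau", "tau"): ["1", "tau"],   # fusing two tau anyons is not unique
}

def fuse_many(anyons):
    """Return a Counter of possible total charges with their multiplicities."""
    outcomes = Counter({anyons[0]: 1})
    for b in anyons[1:]:
        new = Counter()
        for a, mult in outcomes.items():
            for c in fusion[(a, b)]:
                new[c] += mult
        outcomes = new
    return outcomes

# Four tau anyons: the multiplicity of each total charge counts the dimension
# of the corresponding fusion space, which is where qubits can be encoded.
print(fuse_many(["tau"] * 4))   # -> Counter({'tau': 3, '1': 2})
```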

Relevance:

10.00%

Publisher:

Abstract:

An important question which has to be answered in evaluating the suitability of a microcomputer for a control application is the time it would take to execute the specified control algorithm. In this paper, we present a method of obtaining closed-form formulas to estimate this time. These formulas are applicable to control algorithms in which arithmetic operations and matrix manipulations dominate. The method does not require writing detailed programs for implementing the control algorithm. Using this method, the execution times of a variety of control algorithms on a range of 16-bit mini- and recently announced microcomputers are calculated. The formulas have been verified independently by an analysis program which computes the execution time bounds of control algorithms coded in Pascal when they are run on a specified micro- or minicomputer.
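
A small sketch of the kind of closed-form estimate described here: total execution time is the sum of per-operation times weighted by operation counts. The operation counts for a state-feedback update and the per-instruction timings below are assumptions for illustration, not the paper's formulas:

```python
# Illustrative sketch: estimate execution time as sum(count_op * time_op).
def control_law_time_estimate(n, op_times_us):
    """Rough time (microseconds) for u = -K*x with an n x n gain matrix."""
    counts = {
        "multiply": n * n,        # one multiply per matrix element
        "add": n * (n - 1),       # n - 1 additions per output row
        "load_store": 2 * n * n,  # rough operand traffic per element
    }
    return sum(counts[op] * op_times_us[op] for op in counts)

# Hypothetical per-operation times for a 16-bit microcomputer (microseconds).
timings = {"multiply": 25.0, "add": 4.0, "load_store": 2.0}
print(control_law_time_estimate(8, timings))   # rough estimate for an 8-state system
```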