970 results for Parallel programming
Abstract:
The increasing design complexity associated with modern Field Programmable Gate Arrays (FPGAs) has prompted the emergence of 'soft'-programmable processors, which attempt to replace at least part of the custom circuit design problem with a problem of programming parallel processors. Despite substantial advances in this technology, its performance and resource efficiency for computationally complex operations remain in doubt. In this paper we present the first recorded implementation of a softcore Fast Fourier Transform (FFT) on Xilinx Virtex FPGA technology. By employing a streaming processing architecture, we show how it is possible to realise architectures that offer 1.1 GSamples/s throughput and up to 19 times speed-up over the dedicated Xilinx Radix-2 FFT circuit at comparable cost.
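For reference, the butterfly recursion at the heart of a radix-2 FFT can be sketched in a few lines of Python; this is a plain software restatement of the algorithm, not the streaming FPGA architecture the paper implements.

```python
import cmath

def fft_radix2(x):
    """Minimal recursive radix-2 decimation-in-time FFT.

    Assumes len(x) is a power of two. Software reference only, not the
    streaming FPGA architecture the paper implements.
    """
    n = len(x)
    if n == 1:
        return list(x)
    even = fft_radix2(x[0::2])  # FFT of the even-indexed samples
    odd = fft_radix2(x[1::2])   # FFT of the odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle            # butterfly, top output
        out[k + n // 2] = even[k] - twiddle   # butterfly, bottom output
    return out

# An 8-point example: a rectangular pulse and its spectrum.
print(fft_radix2([1, 1, 1, 1, 0, 0, 0, 0]))
```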
Abstract:
Routine molecular diagnostic modalities are unable to confidently detect low-frequency mutations (<5-15%) that may indicate response to targeted therapies. We confirm the presence of a low-frequency NRAS mutation in a rectal cancer patient using massively parallel sequencing, where previous Sanger sequencing results had proved negative and Q-PCR testing inconclusive. There is increasing evidence that these low-frequency mutations may confer resistance to anti-EGFR therapy. In view of the negative/inconclusive Sanger sequencing and Q-PCR results for NRAS mutations in a KRAS wild-type rectal case, the diagnostic biopsy and four distinct subpopulations of cells in the resection specimen after conventional chemo/radiotherapy were sequenced by massively parallel sequencing on the Ion Torrent PGM. DNA was derived from FFPE rectal cancer tissue, and amplicons were produced using the Cancer Hotspot Panel V2 and sequenced using semiconductor technology. NRAS mutations were observed at varying frequencies in the patient biopsy (12.2%) and in all four subpopulations of cells in the resection, with an average frequency of 7.3% (lowest 2.6%). The results of the NGS also provided the mutational status of 49 other genes that may have prognostic or predictive value, including KRAS and PIK3CA. NGS technology has been proposed for diagnostics because of its capability to generate results for large panels of clinically meaningful genes in a cost-effective manner. This case illustrates another potential advantage of the technology: its use for detecting low-frequency mutations that may influence therapeutic decisions in cancer treatment.
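The mutation frequencies reported above are variant allele frequencies, i.e. the share of sequencing reads carrying the mutant allele at the locus. A minimal sketch, with hypothetical read counts chosen only to reproduce the 12.2% biopsy figure:

```python
def variant_allele_frequency(alt_reads, total_reads):
    """Fraction of reads carrying the mutant allele at a locus."""
    return alt_reads / total_reads

# Hypothetical counts, chosen to match the 12.2% reported for the biopsy:
# 122 mutant reads out of 1000 covering the NRAS hotspot.
print(variant_allele_frequency(122, 1000))  # 0.122
```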
Abstract:
An algorithm for approximate credal network updating is presented. The problem in its general formulation is a multilinear optimization task, which can be linearized by an appropriate rule for fixing all the local models apart from those of a single variable. This simple idea can be iterated and quickly leads to very accurate inferences. The approach can also be specialized to classification with credal networks based on the maximality criterion. A complexity analysis for both the problem and the algorithm is reported, together with numerical experiments, which confirm the good performance of the method. While the inner approximation produced by the algorithm gives rise to a classifier which might return a subset of the optimal class set, preliminary empirical results suggest that the accuracy of the optimal class set is seldom affected by the approximate probabilities.
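The linearization rule can be illustrated on a toy multilinear problem: fixing one block of variables makes the objective linear in the other, so each block can be optimized in turn. The sketch below applies this alternating scheme to a bilinear objective over box constraints; it is our illustration of the idea, not the authors' credal-network code.

```python
def maximize_bilinear(A, lo=0.0, hi=1.0, sweeps=20):
    """Alternating linearization on a bilinear toy objective.

    Maximizes sum_ij A[i][j] * x_i * y_j over box constraints: with y
    fixed the objective is linear in x (and vice versa), so each block
    is pushed to its best bound in turn. Illustration only.
    """
    n, m = len(A), len(A[0])
    x, y = [hi] * n, [hi] * m
    for _ in range(sweeps):
        # Fix y: the coefficient of x_i is sum_j A[i][j] * y_j.
        x = [hi if sum(A[i][j] * y[j] for j in range(m)) > 0 else lo
             for i in range(n)]
        # Fix x: the coefficient of y_j is sum_i A[i][j] * x_i.
        y = [hi if sum(A[i][j] * x[i] for i in range(n)) > 0 else lo
             for j in range(m)]
    return x, y

print(maximize_bilinear([[1.0, -2.0], [0.5, 1.5]]))  # ([0.0, 1.0], [1.0, 1.0])
```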
Abstract:
This paper investigates the profile of teachers on the island of Ireland who declared themselves willing to undertake professional development activities in programming, in particular to master programming by taking on-line courses involving the design of computer games. Using the Technology Acceptance Model (TAM), it compares scores for teachers “willing” to undertake the courses with scores for those who declined, and examines other differences between the groups of respondents. Findings reflect the perceived difficulty of programming and the low status currently accorded to the subject in Ireland. The paper also reviews the use of games-based learning as a “hook” to engage learners in programming and discusses the role of gamification as a tool for motivating learners in an on-line course. The on-line course focusing on games design was met with enthusiasm, and there was general consensus that gamification was appropriate for motivating learners in structured courses such as those provided.
Abstract:
The proliferation of video lecture capture in universities worldwide presents an opportunity to analyse video-watching patterns in an attempt to quantify and qualify how students engage and learn with the videos. It also presents an opportunity to investigate whether there are similar student learning patterns during the equivalent physical lecture. The goal of this action-based research project was to capture and quantitatively analyse the viewing behaviours and patterns across a series of video lecture captures from several university Java programming modules. It sought to study whether a quantitative analysis of viewing behaviours of lecture capture videos, coupled with a qualitative evaluation from the students and lecturers, could be correlated to provide generalised patterns. Such patterns could then be used to understand the learning experience of students during videos, and potentially during face-to-face lectures, and thereby present opportunities to reflectively enhance lecturer performance and the students’ overall learning experience. The report establishes a baseline understanding of the analytics of videos for several commonly used pedagogical teaching methods in the delivery of programming courses. It reflects on possible concurrences within live lecture delivery, with the potential to inform and improve lecturing performance.
Abstract:
This case study deals with the role of time series analysis in sociology and its relationship with the wider literature and methodology of comparative case study research. Time series analysis is now well represented in top-ranked sociology journals, often in the form of ‘pooled time series’ research designs. These studies typically pool multiple countries into a pooled time series cross-section panel in order to provide a larger sample for more robust and comprehensive analysis. This approach is well suited to exploring trans-national phenomena and to elaborating useful macro-level theories specific to social structures, national policies, and long-term historical processes. It is less suited, however, to understanding how these global social processes work in different countries. As a result, the complexities of individual countries, which often display dynamics very different from or even contradictory to those suggested in pooled studies, are subsumed. Meanwhile, a robust literature on comparative case-based methods exists in the social sciences, where researchers focus on differences between cases and the complex ways in which they co-evolve or diverge over time. A good example is the inequality literature: although panel studies suggest a general trend of rising inequality driven by the weakening power of labour, the marketisation of welfare, and the rising power of capital, some countries have still managed to remain resilient. This case study takes a closer look at what can be learned by applying the insights of case-based comparative research to the method of time series analysis. Taking international income inequality as its point of departure, it argues that we have much to learn about the viability of different combinations of policy options by examining how they work in different countries over time. By taking representative cases from different welfare systems (liberal, social democratic, corporatist, or antipodean), we can better sharpen our theories of how policies can be more specifically engineered to offset rising inequality. This involves a fundamental realignment of the strategy of time series analysis, grounding it instead in a qualitative appreciation of the historical context of cases as a basis for comparing effects between different countries.
Abstract:
Markov Decision Processes (MDPs) are extensively used to encode sequences of decisions with probabilistic effects. Markov Decision Processes with Imprecise Probabilities (MDPIPs) encode sequences of decisions whose effects are modeled using sets of probability distributions. In this paper we examine the computation of Γ-maximin policies for MDPIPs using multilinear and integer programming. We discuss the application of our algorithms to “factored” models and to a recent proposal, Markov Decision Processes with Set-valued Transitions (MDPSTs), which unifies the fields of probabilistic and “nondeterministic” planning in artificial intelligence research.
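Under the Γ-maximin criterion an agent maximizes, over actions, the worst-case expected value over the credal set. When each credal set is given by finitely many candidate transition distributions, a robust variant of value iteration makes the criterion concrete; the sketch below is our illustration, whereas the paper solves the inner optimization by multilinear and integer programming.

```python
def gamma_maximin_value_iteration(states, actions, credal, reward,
                                  gamma=0.9, iters=100):
    """Robust value iteration under the Gamma-maximin criterion.

    credal[s][a] is assumed to be a finite list of candidate transition
    distributions (dicts mapping successor state to probability).
    """
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        V = {
            s: max(                             # best action...
                min(                            # ...against the worst model
                    sum(p * (reward(s, a, t) + gamma * V[t])
                        for t, p in dist.items())
                    for dist in credal[s][a]
                )
                for a in actions
            )
            for s in states
        }
    return V

# Two states, one action, two candidate distributions out of "s0".
credal = {
    "s0": {"a": [{"s0": 0.5, "s1": 0.5}, {"s0": 0.9, "s1": 0.1}]},
    "s1": {"a": [{"s1": 1.0}]},
}
print(gamma_maximin_value_iteration(
    ["s0", "s1"], ["a"], credal,
    reward=lambda s, a, t: 1.0 if t == "s1" else 0.0))
```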
Abstract:
Energy consumption is an important concern in modern multicore processors. The energy consumed by a multicore processor during the execution of an application can be minimized by tuning the hardware state using knobs such as frequency and voltage. The existing theoretical work on energy minimization using global DVFS (Dynamic Voltage and Frequency Scaling), despite being thorough, ignores the time and energy consumed by the CPU on memory accesses, as well as the dynamic energy consumed by idle cores. This article presents an analytical energy-performance model for parallel workloads that accounts for the time and energy the CPU chip spends on memory accesses in addition to the time and energy it spends on CPU instructions; the model also accounts for the dynamic energy consumed by idle cores. The existing work on global DVFS for parallel workloads shows that using a single frequency for the entire duration of a parallel application is not energy optimal, and that varying the frequency according to changes in the parallelism of the workload can save energy. We present an analytical framework around our energy-performance model to predict the operating frequencies for global DVFS (which depend on the amount of parallelism) that minimize the overall CPU energy consumption. We show how the optimal frequencies in our model differ from those in a model that does not account for memory accesses, and how the memory intensity of an application affects the optimal frequencies.
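The core trade-off can be reproduced with a toy model: compute time shrinks as frequency rises, memory-stall time does not, and the chip keeps drawing power during stalls. The sketch below uses our own simplifying assumptions (cubic dynamic power, a single stall term), not the article's exact formulation, but it shows the claimed effect: accounting for memory accesses shifts the energy-optimal frequency.

```python
def total_energy(f, cpu_work=1.0, mem_time=0.0, k=1.0, p_static=1.0):
    """Toy energy model: dynamic power k*f**3 plus static power p_static,
    both drawn for the whole run; memory-stall time does not scale with f.

    f is in GHz and cpu_work in gigacycles, so times come out in seconds.
    """
    t_total = cpu_work / f + mem_time  # compute time shrinks with f, stalls don't
    return (k * f**3 + p_static) * t_total

# Sweeping frequencies shows that modelling memory stalls lowers the
# energy-optimal operating point, the effect the article formalizes.
freqs = [0.5 + 0.01 * i for i in range(251)]  # 0.50 .. 3.00 GHz
print(min(freqs, key=total_energy))                             # ~0.79 GHz
print(min(freqs, key=lambda f: total_energy(f, mem_time=0.2)))  # lower optimum
```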
Abstract:
Some plants of the genus Schinus have been used in folk medicine as topical antiseptics, digestives, purgatives, diuretics, analgesics or antidepressants, and also against respiratory and urinary infections. The chemical composition of the essential oils of S. molle and S. terebinthifolius has been evaluated and shows high variability depending on the part of the plant studied and on the geographic and climatic region. The pharmacological properties, namely the antimicrobial, anti-tumoural and anti-inflammatory activities, are conditioned by the chemical composition of the essential oils. Given the difficulty of inferring the pharmacological properties of Schinus essential oils without extensive experimental work, this work focuses on the development of a decision support system, in terms of its knowledge representation and reasoning procedures, under a formal framework based on Logic Programming, complemented with a computational approach centred on Artificial Neural Networks, together with the respective Degree-of-Confidence placed on each such outcome.
Abstract:
The problem of determining a maximum matching, or whether there exists a perfect matching, is very common in a large variety of applications and has been extensively studied in graph theory. In this paper we introduce a characterisation of a family of graphs whose stability number is determined by convex quadratic programming. The main results connected with the recognition of this family of graphs are also introduced. There follows a necessary and sufficient condition which characterises a graph with a perfect matching, and an algorithmic strategy, based on the determination of the stability number of line graphs by convex quadratic programming, applied to the determination of a perfect matching. A numerical example for the recognition of graphs with a perfect matching is described. Finally, the above algorithmic strategy is extended to the determination of a maximum matching of an arbitrary graph, and some related results are presented.
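The basic fact the strategy builds on is that a graph on n vertices has a perfect matching exactly when a maximum matching contains n/2 edges. A minimal check along these lines, using networkx's blossom-based maximum matching rather than the quadratic-programming route developed in the paper:

```python
import networkx as nx

def has_perfect_matching(graph):
    """A matching is perfect exactly when it covers every vertex,
    i.e. when a maximum-cardinality matching has |V|/2 edges."""
    matching = nx.max_weight_matching(graph, maxcardinality=True)
    return 2 * len(matching) == graph.number_of_nodes()

# The 4-cycle has a perfect matching; the 3-cycle (odd order) cannot.
print(has_perfect_matching(nx.cycle_graph(4)))  # True
print(has_perfect_matching(nx.cycle_graph(3)))  # False
```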
Abstract:
The main motivation for the work presented here began with previously conducted experiments with a programming concept, at the time named "Macro". These experiments led to the conviction that it would be possible to build an engine control system from scratch that could eliminate many of the current problems of engine management systems in a direct and intrinsic way. It was also hoped that it would minimize the amount of software and hardware needed for a final, fully functional system. Initially, this work makes a comprehensive survey of the state of the art in the specific area of the software, and corresponding hardware, of automotive tools and automotive ECUs. Problems arising from such software are identified, and it becomes clear that practically all of them stem, directly or indirectly, from the continued, comprehensive use of extremely long and complex "tool chains". Similarly, on the hardware side, it is argued that the problems stem from the extreme complexity of, and inter-dependencies inside, processor architectures. The conclusions are presented through an extensive list of "pitfalls", thoroughly enumerated, identified and characterized, and solutions are proposed for the various current issues and for their implementation. All of this final work is part of a "proof-of-concept" system called "ECU2010". The central element of this system is the aforementioned "Macro" concept: a graphical block representing one of the many operations required in an automotive system, with arithmetic, logic, filtering, integration and multiplexing functions, among others. The end result of the proposed work is a single, fully integrated tool enabling the development and management of the entire system in one simple visual interface. Part of the presented result relies on a hardware platform fully adapted to the software, enabling high flexibility and scalability, in addition to using exactly the same technology for the ECU, data logger and peripherals alike. Current systems follow a mostly evolutionary path, allowing only the online calibration of parameters but never the online alteration of the automotive functionality algorithms themselves. By contrast, the system developed and described in this thesis had the advantage of following a "clean-slate" approach, whereby everything could be rethought globally. In the end, of all the system characteristics, "LIVE-Prototyping" is the most relevant feature, allowing the adjustment of automotive algorithms (e.g. injection, ignition, lambda control) 100% online, keeping the engine constantly running, without ever having to stop or reboot to make such changes. This eliminates the "turnaround delay" typically present in current automotive systems, thereby enhancing their efficiency and handling.
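As a purely hypothetical illustration of the "Macro" idea described above (the class name, operations and pipeline structure are ours, not the thesis's ECU2010 implementation), each block can be seen as one operation in a dataflow chain:

```python
class Macro:
    """One operation in the control chain: arithmetic, logic, filtering, ..."""

    def __init__(self, name, operation):
        self.name = name
        self.operation = operation

    def __call__(self, value):
        return self.operation(value)

def run_pipeline(blocks, sample):
    """Feed one sensor sample through a chain of Macro blocks."""
    for block in blocks:
        sample = block(sample)
    return sample

# e.g. scale a raw sensor reading, then clamp it to a plausible lambda range
pipeline = [
    Macro("scale", lambda v: v * 0.005),
    Macro("clamp", lambda v: min(max(v, 0.0), 2.0)),
]
print(run_pipeline(pipeline, 180))  # 0.9
```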
Abstract:
In the field of control systems it is common to use techniques based on model adaptation to carry out control of plants whose mathematical analysis may be intricate. Interest in biologically inspired learning algorithms for control techniques such as Artificial Neural Networks and Fuzzy Systems is growing. Along these lines, this paper gives a perspective on the quality of results produced by two different biologically inspired learning algorithms for the design of B-spline neural networks (BNNs) and fuzzy systems (FSs). One approach is Genetic Programming (GP), used for BNN design; the other is the Bacterial Evolutionary Algorithm (BEA), applied to fuzzy rule extraction. The possibility of incorporating a multi-objective approach into the GP algorithm is also outlined, enabling the designer to obtain models better suited to their intended use.
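Both designs ride on the same evolutionary skeleton: evaluate a population, keep the fittest, and generate variations. A deliberately generic sketch follows; the encoding, fitness and mutation operators (here trivial stand-ins) are our assumptions, not the paper's actual GP or BEA operators.

```python
import random

def evolve(population, fitness, mutate, generations=50, elite=2):
    """Bare-bones evolutionary loop: rank, keep elites, mutate parents."""
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:max(elite, len(ranked) // 2)]  # truncation selection
        children = [mutate(random.choice(parents))
                    for _ in range(len(population) - elite)]
        population = ranked[:elite] + children           # elitism + offspring
    return max(population, key=fitness)

# Toy usage: evolve a 3-parameter vector toward the target [1, 2, 3].
target = [1.0, 2.0, 3.0]
fit = lambda ind: -sum((a - b) ** 2 for a, b in zip(ind, target))
mut = lambda ind: [a + random.gauss(0, 0.1) for a in ind]
pop = [[random.uniform(0, 4) for _ in range(3)] for _ in range(20)]
print(evolve(pop, fit, mut))
```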
Abstract:
The design phase of B-spline neural networks is a highly computationally complex task, and existing heuristics have been found to be highly dependent on the initial conditions employed. Interest in biologically inspired learning algorithms for control techniques such as Artificial Neural Networks and Fuzzy Systems is growing. In this paper the Bacterial Programming approach is presented, which is based on replicating the phenomenon of microbial evolution. This technique produces an efficient topology search and, additionally, obtains more consistent solutions.
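The bacterial mutation operator at the core of such algorithms clones an individual, randomizes one segment per clone, and keeps the best variant before moving to the next segment. A sketch under our own simplifying assumptions (real-valued genome, hypothetical parameter names), not the paper's topology encoding:

```python
import random

def bacterial_mutation(genome, fitness, clones=4, segment_len=2):
    """Clone the individual, randomize one segment per clone, keep the
    best variant, then move on to the next segment."""
    genome = list(genome)
    for start in range(0, len(genome), segment_len):
        candidates = [genome]
        for _ in range(clones):
            clone = list(genome)
            for i in range(start, min(start + segment_len, len(genome))):
                clone[i] = random.random()     # randomize this segment only
            candidates.append(clone)
        genome = max(candidates, key=fitness)  # best clone wins the segment
    return genome

# Toy usage: fitness rewards genes near 0.5.
fit = lambda g: -sum((v - 0.5) ** 2 for v in g)
print(bacterial_mutation([random.random() for _ in range(6)], fit))
```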