947 results for parallel processing
Abstract:
TLE in infancy has been the subject of varied research. Topographical and structural evidence coincides with the neuronal systems responsible for the most specialized and complex auditory processing. Recent studies have shown that hemispheric asymmetry is needed to optimize central auditory processing (CAP) and the acquisition and learning of a language system. A new functional research paradigm is required to study mental processes that call for the analysis of cognitive-sensory information processed over very short periods of time (milliseconds), such as event-related potentials (ERPs). Thus, in this article, we hypothesize that TLE in infancy could be a good model for the topographic and functional study of CAP and its developmental process, contributing to a better understanding of the learning difficulties experienced by children with this neurological disorder.
Abstract:
Alheiras are traditional smoked, fermented meat sausages produced in Portugal, with an undeniable cultural and gastronomic legacy. In this study, we assessed the nutritional value of this product, as well as the influence of different types of thermal processing. Alheiras from Mirandela were submitted to six different procedures: microwave, skillet, oven, charcoal grill, electric fryer and electric grill. Protein, fat, carbohydrate, mineral, NaCl and cholesterol contents, as well as the fatty acid profile, were evaluated. The results show that alheiras are not hypercaloric but are an unbalanced foodstuff (high levels of proteins and lipids), and that the type of processing has a major impact on their nutritional value. Charcoal grilling is the healthiest option: less fat (12.5 g/100 g) and cholesterol (29.3 mg/100 g), corresponding to a lower caloric intake (231.8 kcal, 13% less than the raw product). Conversely, fried alheiras presented the worst nutritional profile, with the highest levels of fat (18.1 g/100 g) and cholesterol (76.0 mg/100 g).
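For context, caloric values such as the 231.8 kcal/100 g reported above are conventionally derived from the measured macronutrients using the standard Atwater factors; the abstract does not detail the exact conversion used, so the following is only a generic statement of that calculation:

$$E\;[\mathrm{kcal}/100\,\mathrm{g}] \approx 4P + 9F + 4C,$$

where $P$, $F$ and $C$ are the protein, fat and carbohydrate contents in g/100 g.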
Abstract:
Report on the Final Master's Project submitted to obtain the degree of Master in Electronics and Telecommunications Engineering.
Abstract:
The main purpose of this work is to present and interpret the changes in structure and physical properties of tantalum oxynitride (TaNxOy) thin films, produced by DC reactive magnetron sputtering, as the processing parameters are varied. A set of TaNxOy films was prepared by varying the flow rate of the reactive gases, using a N2/O2 gas mixture with a concentration ratio of 17:3. The films obtained by this process exhibited significant differences. The measured compositions and the interpretation of the X-ray diffraction results show that, depending on the partial pressure of the reactive gases, the films are: essentially dark grey and metallic, when the atomic ratio (N + O)/Ta < 0.1, evidencing a tetragonal β-Ta structure; grey-brownish, when 0.1 < (N + O)/Ta < 1, exhibiting a face-centred cubic (fcc) TaN-like structure; and transparent oxide-type, when (N + O)/Ta > 1, evidencing the existence of Ta2O5, but with an amorphous structure. The transparent films exhibit refractive indices in the visible region that are always higher than 2.0. The wear resistance of the films is relatively good; the best behaviour was obtained for the films with (N + O)/Ta ≈ 0.5 and (N + O)/Ta ≈ 1.3.
Abstract:
Prostate cancer (PCa) is among the malignancies with the highest incidence worldwide. Although effective therapy is available for early-stage PCa, treatment of advanced disease is largely ineffective and remains a clinical challenge. microRNA (miRNA) dysregulation is associated with PCa development and progression. In fact, several studies have reported a widespread downregulation of miRNAs in PCa, which highlights the importance of studying compounds capable of restoring global miRNA expression. The main aim of this study was to assess the usefulness of enoxacin as an anti-tumoral agent in PCa, owing to its ability to induce miRNA biogenesis in a TRBP-mediated manner. Using a panel of five PCa cell lines, we observed that all of them were wild type for the TARBP2 gene and expressed TRBP protein. Furthermore, primary prostate carcinomas displayed normal levels of TRBP protein. Remarkably, enoxacin was able to decrease cell viability, induce apoptosis, cause cell cycle arrest, and inhibit the invasiveness of cell lines. Enoxacin was also effective in restoring the global expression of miRNAs. This study is the first to show that PCa cells are highly responsive to the anti-tumoral effects of enoxacin. Therefore, enoxacin constitutes a promising therapeutic agent for PCa.
Abstract:
In this paper we survey the most relevant results for the priority-based schedulability analysis of real-time tasks, for both fixed and dynamic priority assignment schemes. We give emphasis to worst-case response time analysis in non-preemptive contexts, which is fundamental for communication schedulability analysis. We define an architecture to support priority-based scheduling of messages at the application process level of a specific fieldbus communication network, the PROFIBUS. The proposed architecture improves the messages' worst-case response time, overcoming the limitation of first-come-first-served (FCFS) PROFIBUS queue implementations.
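For reference, fixed-priority worst-case response-time analysis of the kind surveyed here is typically based on the recurrence below (a textbook formulation with a blocking term, not a reproduction of the paper's specific PROFIBUS analysis):

$$R_i^{(n+1)} = C_i + B_i + \sum_{j \in hp(i)} \left\lceil \frac{R_i^{(n)}}{T_j} \right\rceil C_j, \qquad R_i^{(0)} = C_i,$$

iterated until a fixed point $R_i^{(n+1)} = R_i^{(n)}$, where $C_j$ and $T_j$ are the worst-case execution (or transmission) time and period of each higher-priority task or message $j \in hp(i)$, and $B_i$ bounds the blocking from non-preemptive lower-priority work; the task set is schedulable if each $R_i$ does not exceed its deadline $D_i$.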
Abstract:
Coronary artery disease (CAD) is currently one of the most prevalent diseases in the world population, and calcium deposits in coronary arteries are one direct risk factor. These can be assessed by the calcium score (CS) application, available via a computed tomography (CT) scan, which gives an accurate indication of the development of the disease. However, the ionising radiation applied to patients is high. This study aimed to optimise the acquisition protocol in order to reduce the radiation dose and to explain the flow of procedures used to quantify CAD. The main differences in the clinical results when automated or semi-automated post-processing is used will be shown, and the epidemiology, imaging, risk factors and prognosis of the disease described. The software steps and the values that allow the risk of developing CAD to be predicted will be presented. A 64-row multidetector CT scanner with dual source and two phantoms (pig hearts) were used to demonstrate the advantages and disadvantages of the Agatston method. The tube energy was balanced. Two measurements were obtained in each of the three experimental protocols (64, 128, 256 mAs). Considerable differences appeared between the CS values across the protocol variations. The predefined standard protocol provided the lowest radiation dose (0.43 mGy). This study found that the variation in radiation dose between protocols, taking into consideration the dose control systems attached to the CT equipment and image quality, was not sufficient to justify changing the default protocol provided by the manufacturer.
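To make the scoring mentioned above concrete, the sketch below implements the standard Agatston weighting, in which each calcified lesion with peak attenuation of at least 130 HU contributes its area multiplied by a density factor of 1 to 4. This is a minimal illustration of the published method; the function names and example lesions are ours, not the study's software.

```python
def agatston_weight(peak_hu):
    """Standard Agatston density factor for a lesion's peak attenuation (HU)."""
    if peak_hu >= 400:
        return 4
    if peak_hu >= 300:
        return 3
    if peak_hu >= 200:
        return 2
    if peak_hu >= 130:
        return 1
    return 0  # below the calcification threshold

def agatston_score(lesions):
    """Sum of area (mm^2) x density factor over all calcified lesions.

    `lesions` is an iterable of (area_mm2, peak_hu) pairs, one per lesion
    per slice; lesions smaller than ~1 mm^2 are conventionally discarded.
    """
    return sum(area * agatston_weight(hu) for area, hu in lesions if area >= 1.0)

# Hypothetical lesions from one scan: 4.2*1 + 10.5*3 + 2.0*4 = 43.7
print(agatston_score([(4.2, 156.0), (10.5, 310.0), (2.0, 455.0)]))
```

The total is then mapped onto the usual risk categories (0, 1-10, 11-100, 101-400, >400) to grade the extent of coronary calcification.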
Abstract:
This paper proposes a global multiprocessor scheduling algorithm for the Linux kernel that combines the global EDF scheduler with a priority-aware work-stealing load balancing scheme, enabling parallel real-time tasks to be executed on more than one processor at a given time instant. We argue that some priority inversion may actually be acceptable, provided it helps reduce contention, communication, synchronisation and coordination between parallel threads, while still guaranteeing the system's expected predictability. Experimental results demonstrate the low scheduling overhead of the proposed approach compared to an existing real-time deadline-oriented scheduling class for the Linux kernel.
Abstract:
Dynamic parallel scheduling using work-stealing has gained popularity in academia and industry for its good performance, ease of implementation and theoretical bounds on space and time. Each core treats its own double-ended queue (deque) as a stack, pushing and popping threads at the bottom, but treats the deque of a randomly selected busy core as a queue, stealing threads only from the top whenever it is idle. However, this standard approach cannot be directly applied to real-time systems, where the importance of parallelising tasks is increasing due to the limitations of multiprocessor scheduling theory regarding parallelism. Using one non-priority deque per core is an obvious source of priority inversion: high-priority tasks may be enqueued behind lower-priority tasks, possibly leading to deadline misses, since in this case the lower-priority tasks are the candidates when a stealing operation occurs. Our proposal is to replace the single non-priority deque of work-stealing with ordered per-processor priority deques of ready threads. The scheduling algorithm starts with a single deque per core, but unlike traditional work-stealing, the total number of deques in the system may now exceed the number of processors. Instead of stealing randomly, cores steal from the highest-priority deque.
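A minimal sketch of this idea, assuming one priority-ordered deque per core from which the owner pops the highest-priority ready thread and from which idle cores steal; all names and the heap-based structure are illustrative choices of ours, not the paper's implementation:

```python
import heapq
import itertools

_tick = itertools.count()  # tie-breaker preserving FIFO order among equal priorities

class PriorityDeque:
    """Per-core ready queue kept in priority order (lower value = higher priority)."""
    def __init__(self):
        self._heap = []

    def push(self, priority, thread):
        heapq.heappush(self._heap, (priority, next(_tick), thread))

    def pop_highest(self):
        # Owner and thieves both take the highest-priority ready thread,
        # avoiding the priority inversion of plain LIFO/FIFO work-stealing.
        return heapq.heappop(self._heap)[2] if self._heap else None

    def top_priority(self):
        return self._heap[0][0] if self._heap else None

def steal(cores, idle_core):
    """An idle core steals from the deque whose top thread has the highest priority."""
    busy = [(q.top_priority(), cid) for cid, q in cores.items()
            if cid != idle_core and q.top_priority() is not None]
    if not busy:
        return None
    _, victim = min(busy)  # smallest value = highest priority
    return cores[victim].pop_highest()

# Usage: core 1 is idle and steals the globally highest-priority ready thread.
cores = {0: PriorityDeque(), 1: PriorityDeque()}
cores[0].push(5, "background thread")
cores[0].push(1, "urgent thread")
print(steal(cores, idle_core=1))  # -> "urgent thread"
```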
Abstract:
High-level parallel languages offer a simple way for application programmers to specify parallelism in a form that scales easily with problem size, leaving the scheduling of tasks onto processors to be performed at runtime. Therefore, if the underlying system cannot efficiently execute those applications on the available cores, the benefits are lost. In this paper, we consider how to schedule highly heterogeneous parallel applications that require real-time performance guarantees on multicore processors. The paper proposes a novel scheduling approach that combines the global Earliest Deadline First (EDF) scheduler with a priority-aware work-stealing load balancing scheme, which enables parallel real-time tasks to be executed on more than one processor at a given time instant. Experimental results demonstrate the better scalability and lower scheduling overhead of the proposed approach compared to an existing real-time deadline-oriented scheduling class for the Linux kernel.
Abstract:
Multicore platforms have made parallelism a central concern. Parallel programming models are being put forward to give application programmers a better way to expose opportunities for parallelism, by pointing out potentially parallel regions within tasks and leaving the actual, dynamic scheduling of these regions onto processors to be performed at runtime, so as to exploit the maximum amount of parallelism. It is in this context that this paper proposes a scheduling approach that combines the constant-bandwidth server abstraction with a priority-aware work-stealing load balancing scheme which, while ensuring isolation among tasks, enables parallel tasks to be executed on more than one processor at a given time instant.
Abstract:
Recent trends in chip architectures, with higher numbers of heterogeneous cores and non-uniform memory/non-coherent caches, bring renewed attention to the use of Software Transactional Memory (STM) as a fundamental building block for developing parallel applications. Nevertheless, although STM promises to ease concurrent and parallel software development, it relies on the possibility of aborting conflicting transactions to maintain data consistency, which impacts the responsiveness and timing guarantees required by embedded real-time systems. In these systems, contention delays must be (efficiently) limited so that the response times of tasks executing transactions are upper-bounded and task sets can be feasibly scheduled. In this paper we assess the use of STM in the development of embedded real-time software, arguing that the amount of contention can be reduced if read-only transactions access recent consistent data snapshots, progressing in a wait-free manner. We show how the required number of versions of a shared object can be calculated for a given set of tasks. We also outline an algorithm to manage conflicts between update transactions that prevents starvation.
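A minimal sketch of the wait-free read-only mechanism described above, assuming each shared object keeps a bounded ring of recent committed versions so that a reader can always find a consistent snapshot; the structure and names are ours, and the paper's calculation of the required number of versions is not reproduced:

```python
import itertools
from collections import deque

_commit_clock = itertools.count(1)  # global logical commit timestamps

class VersionedObject:
    """Shared object keeping its last `k` committed versions, so read-only
    transactions can progress wait-free on a recent consistent snapshot.
    The required `k` would be derived from the task set, as the paper shows."""
    def __init__(self, value, k=4):
        self._versions = deque([(0, value)], maxlen=k)  # (timestamp, value)

    def commit(self, value):
        # Update transactions install a new version at commit time;
        # the oldest version is evicted once `k` versions exist.
        self._versions.append((next(_commit_clock), value))

    def read_at(self, snapshot_ts):
        # Wait-free read: newest version no newer than the reader's snapshot.
        for ts, value in reversed(self._versions):
            if ts <= snapshot_ts:
                return value
        raise RuntimeError("snapshot too old: version evicted (k too small)")

# A read-only transaction fixes its snapshot once; it never aborts or blocks.
x = VersionedObject(10)
snapshot_ts = 0                 # reader starts before any update commits
x.commit(20)                    # a concurrent update transaction commits
print(x.read_at(snapshot_ts))   # -> 10, still a consistent snapshot
```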
Abstract:
Over the last three decades, computer architects have been able to increase the performance of single processors by, e.g., increasing clock speed, introducing cache memories and using instruction-level parallelism. However, because of power consumption and heat dissipation constraints, this trend is ceasing. Hardware engineers have instead moved to new chip architectures with multiple processor cores on a single chip. With multi-core processors, applications can complete more total work than with one core alone. Parallel programming models have been proposed as promising means of using multi-core processors more effectively. This paper discusses some of the existing models and frameworks for parallel programming, leading to the outline of a draft parallel programming model for Ada.
Abstract:
Networked control systems (NCSs) are spatially distributed systems in which the communication between sensors, actuators and controllers occurs through a shared band-limited digital communication network. The use of a shared communication network, in contrast to several dedicated independent connections, introduces new challenges, which are even more acute in large-scale and dense networked control systems. In this paper we investigate a recently introduced technique for gathering information from a dense sensor network for use in networked control applications. Efficiently obtaining an approximate interpolation of the sensed data offers a good trade-off between accuracy in the measurement of the input signals and the delay until actuation, both important aspects for the quality of control. We introduce a variation on the state-of-the-art algorithms which we prove performs better, because it takes into account the changes of the input signal over time during the process of obtaining an approximate interpolation.
Abstract:
Cooperating objects (COs) is a recently coined term used to signify the convergence of classical embedded computer systems, wireless sensor networks, and robotics and control. We present the essential elements of a reference architecture for scalable data processing under the CO paradigm.