948 results for parallel robots, cable driven, underactuated, calibration, sensitivity, accuracy
Abstract:
The problem of uncertainty propagation in composite laminate structures is studied. An approach based on the optimal design of composite structures to achieve a target reliability level is proposed. Using the Uniform Design Method (UDM), a set of design points is generated over a design domain centred at the mean values of the random variables, aimed at studying variability across the design space. The most critical Tsai number, the structural reliability index and the sensitivities are obtained for each UDM design point, using the maximum load obtained from the optimal design search. Using the UDM design points as input/output patterns, an Artificial Neural Network (ANN) is developed based on supervised evolutionary learning. Finally, using the developed ANN, a Monte Carlo simulation procedure is implemented and the variability of the structural response is studied through global sensitivity analysis (GSA). The GSA is based on first-order Sobol indices and relative sensitivities. An appropriate GSA algorithm for obtaining the Sobol indices is proposed. The most important sources of uncertainty are identified.
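A first-order Sobol index, on which the GSA above relies, can be estimated with a plain pick-freeze Monte Carlo scheme. The sketch below is illustrative only — it uses a toy model y = x1 + x2 with uniform inputs rather than the ANN surrogate described in the abstract, and all names are hypothetical:

```python
import random

def sobol_first_order(f, n=20000, seed=0):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol index of x1
    for y = f(x1, x2) with independent uniform(0, 1) inputs.
    S1 = Var(E[Y | X1]) / Var(Y), estimated as Cov(Y_A, Y_B) / Var(Y_A),
    where Y_B reuses (freezes) x1 and resamples x2."""
    rng = random.Random(seed)
    ya, yb = [], []
    for _ in range(n):
        x1 = rng.random()
        x2, x2_resampled = rng.random(), rng.random()
        ya.append(f(x1, x2))
        yb.append(f(x1, x2_resampled))  # x1 frozen, x2 resampled
    mean = sum(ya) / n
    var = sum((y - mean) ** 2 for y in ya) / n
    cov = sum((a - mean) * (b - mean) for a, b in zip(ya, yb)) / n
    return cov / var
```

For y = x1 + x2 with independent uniform(0, 1) inputs, each input explains half of the output variance, so the estimate converges to 0.5.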
Abstract:
Final Master's project carried out at the Laboratório Nacional de Engenharia Civil (LNEC) to obtain the Master's degree in Civil Engineering from the Instituto Superior de Engenharia de Lisboa, under the cooperation protocol between ISEL and LNEC.
Abstract:
Final Master's project for obtaining the Master's degree in Chemical and Biological Engineering.
Abstract:
This paper proposes a global multiprocessor scheduling algorithm for the Linux kernel that combines the global EDF scheduler with a priority-aware work-stealing load balancing scheme, enabling parallel real-time tasks to be executed on more than one processor at a given time instant. We argue that some priority inversion may actually be acceptable, provided it helps reduce contention, communication, synchronisation and coordination between parallel threads, while still guaranteeing the system's expected predictability. Experimental results demonstrate the low scheduling overhead of the proposed approach compared with an existing real-time deadline-oriented scheduling class for the Linux kernel.
Abstract:
Dynamic parallel scheduling using work-stealing has gained popularity in academia and industry for its good performance, ease of implementation and theoretical bounds on space and time. Cores treat their own double-ended queues (deques) as a stack, pushing and popping threads from the bottom, but treat the deque of another, randomly selected busy core as a queue, stealing threads only from the top whenever they are idle. However, this standard approach cannot be directly applied to real-time systems, where the importance of parallelising tasks is increasing due to the limitations of multiprocessor scheduling theory regarding parallelism. Using one deque per core is an obvious source of priority inversion, since high-priority tasks may be enqueued after lower-priority tasks, possibly leading to deadline misses because the lower-priority tasks are then the candidates when a stealing operation occurs. Our proposal is to replace the single non-priority deque of work-stealing with ordered per-processor priority deques of ready threads. The scheduling algorithm starts with a single deque per core but, unlike traditional work-stealing, the total number of deques in the system may now exceed the number of processors. Instead of stealing randomly, cores steal from the highest-priority deque.
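The replacement of random-victim stealing by priority-ordered stealing described above can be sketched in a few lines. This is a simplified, sequential illustration (no real concurrency or lock-free deques), with lower numeric values meaning higher priority; all names are hypothetical:

```python
import heapq

class Core:
    """A core with a local priority queue of ready threads
    (lower value = higher priority)."""
    def __init__(self, cid):
        self.cid = cid
        self.deque = []  # heap of (priority, task) pairs

    def push(self, priority, task):
        heapq.heappush(self.deque, (priority, task))

    def pop_local(self):
        return heapq.heappop(self.deque) if self.deque else None

def steal(cores, thief):
    """Instead of stealing from a random victim, pick the victim whose
    top-of-deque task has the highest priority system-wide."""
    victims = [c for c in cores if c is not thief and c.deque]
    if not victims:
        return None
    victim = min(victims, key=lambda c: c.deque[0][0])  # deque[0] is the heap min
    return heapq.heappop(victim.deque)

cores = [Core(i) for i in range(4)]
cores[1].push(5, "t-low")
cores[2].push(1, "t-high")
stolen = steal(cores, cores[0])
# stolen == (1, "t-high"): the highest-priority ready task, not a random one
```

Under plain work-stealing, an idle core could just as easily have stolen "t-low" from core 1, which is exactly the priority inversion the abstract describes.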
Abstract:
Real-time embedded applications require processing large amounts of data within small time windows. Adaptively parallelising and distributing workloads is a suitable solution for computationally demanding applications. The purpose of the Parallel Real-Time Framework for distributed adaptive embedded systems is to guarantee local and distributed processing of real-time applications. This work identifies promising research directions for parallel/distributed real-time embedded applications.
Abstract:
Embedded real-time applications increasingly present high computation requirements that must be completed within specific deadlines, yet exhibit highly variable patterns depending on the set of data available at a given instant. The current trend towards parallel processing in the embedded domain provides higher processing power; however, it does not address the variability in the processing pattern. Dimensioning each device for its worst-case scenario implies lower average utilisation and increased available, but unusable, processing capacity in the overall system. One solution to this problem is to extend the parallel execution of the applications, allowing networked nodes to distribute the workload to neighbour nodes in peak situations. In this context, this report proposes a framework to develop parallel and distributed real-time embedded applications, transparently using OpenMP and Message Passing Interface (MPI) within a programming model based on OpenMP. The technical report also devises an integrated timing model, which enables structured reasoning about the timing behaviour of these hybrid architectures.
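The peak-offloading idea — run locally up to capacity, push the overflow to neighbour nodes — can be illustrated with a small assignment sketch. This is a hypothetical greedy illustration of the workload-distribution step only, not the OpenMP/MPI framework itself:

```python
def distribute(workload, local_capacity, neighbours):
    """Assign work units locally up to capacity, then offload the peak to
    neighbour nodes greedily, in decreasing order of their spare capacity.
    `neighbours` maps node name -> number of units it can absorb."""
    local = workload[:local_capacity]
    overflow = workload[local_capacity:]
    assignment = {"local": local}
    for name, spare in sorted(neighbours.items(), key=lambda kv: -kv[1]):
        take, overflow = overflow[:spare], overflow[spare:]
        if take:
            assignment[name] = take
    assignment["unassigned"] = overflow  # what no node could absorb
    return assignment

plan = distribute(list(range(10)), local_capacity=4,
                  neighbours={"nodeB": 3, "nodeC": 5})
# plan == {"local": [0, 1, 2, 3], "nodeC": [4, 5, 6, 7, 8],
#          "nodeB": [9], "unassigned": []}
```

A real framework would of course also have to account for the communication delay of offloading, which is what the integrated timing model mentioned above is for.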
Abstract:
High-level parallel languages offer a simple way for application programmers to specify parallelism in a form that easily scales with problem size, leaving the scheduling of the tasks onto processors to be performed at runtime. Therefore, if the underlying system cannot efficiently execute those applications on the available cores, the benefits will be lost. In this paper, we consider how to schedule highly heterogeneous parallel applications that require real-time performance guarantees on multicore processors. The paper proposes a novel scheduling approach that combines the global Earliest Deadline First (EDF) scheduler with a priority-aware work-stealing load balancing scheme, which enables parallel real-time tasks to be executed on more than one processor at a given time instant. Experimental results demonstrate the better scalability and lower scheduling overhead of the proposed approach compared with an existing real-time deadline-oriented scheduling class for the Linux kernel.
Abstract:
Multicore platforms have transformed parallelism into a main concern. Parallel programming models are being put forward to provide a better approach for application programmers to expose the opportunities for parallelism by pointing out potentially parallel regions within tasks, leaving the actual and dynamic scheduling of these regions onto processors to be performed at runtime, exploiting the maximum amount of parallelism. It is in this context that this paper proposes a scheduling approach that combines the constant-bandwidth server abstraction with a priority-aware work-stealing load balancing scheme which, while ensuring isolation among tasks, enables parallel tasks to be executed on more than one processor at a given time instant.
Abstract:
The recent trend towards chip architectures with a higher number of heterogeneous cores and non-uniform memory/non-coherent caches brings renewed attention to the use of Software Transactional Memory (STM) as a fundamental building block for developing parallel applications. Nevertheless, although STM promises to ease concurrent and parallel software development, it relies on the possibility of aborting conflicting transactions to maintain data consistency, which affects the responsiveness and timing guarantees required by embedded real-time systems. In these systems, contention delays must be (efficiently) limited so that the response times of tasks executing transactions are upper-bounded and task sets can be feasibly scheduled. In this paper we assess the use of STM in the development of embedded real-time software, arguing that the amount of contention can be reduced if read-only transactions access recent consistent data snapshots, progressing in a wait-free manner. We show how the required number of versions of a shared object can be calculated for a set of tasks. We also outline an algorithm to manage conflicts between update transactions that prevents starvation.
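Keeping the last k committed versions of a shared object, so that read-only transactions can always find a recent consistent snapshot without blocking or aborting, can be sketched as follows. This is a minimal single-threaded illustration of the multi-version idea, not the paper's algorithm or its bound on k; names are hypothetical:

```python
from collections import deque

class VersionedObject:
    """Shared object retaining the last k committed versions so read-only
    transactions can access a recent consistent snapshot wait-free."""
    def __init__(self, initial, k):
        # (commit_ts, value) pairs; deque(maxlen=k) evicts the oldest on append
        self.versions = deque([(0, initial)], maxlen=k)

    def commit(self, ts, value):
        """An update transaction installs a new version at commit time."""
        self.versions.append((ts, value))

    def read_at(self, ts):
        """A read-only transaction reads the newest version with
        commit timestamp <= its own snapshot timestamp."""
        for commit_ts, value in reversed(self.versions):
            if commit_ts <= ts:
                return value
        raise LookupError("snapshot older than retained history")
```

The point of the paper's analysis is precisely to choose k large enough that `read_at` never raises for any reader admitted by the task set, so readers never force writers to abort.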
Abstract:
Over the last three decades, computer architects have been able to increase the performance of single processors by, e.g., increasing clock speed, introducing cache memories and using instruction-level parallelism. However, because of power consumption and heat dissipation constraints, this trend is coming to an end. In recent times, hardware engineers have instead moved to new chip architectures with multiple processor cores on a single chip. With multi-core processors, applications can complete more total work than with one core alone. To take advantage of them, parallel programming models have been proposed as promising solutions for using multi-core processors more effectively. This paper discusses some of the existing models and frameworks for parallel programming, leading to the outline of a draft parallel programming model for Ada.
Abstract:
Introduction – Cutaneous malignant melanoma (CMM) is considered one of the most lethal neoplasms, and its follow-up relies, in addition to clinical examinations and tumour-marker analysis, on several imaging methods, such as Positron Emission Tomography/Computed Tomography (PET/CT) with 18F-fluorodeoxyglucose (18F-FDG). This study aims to assess the usefulness of PET/CT in evaluating the extent of disease and suspected recurrence of CMM, comparing the imaging findings with those described in CT studies. Methodology – Retrospective study of 62 PET/CT studies performed on 50 patients diagnosed with CMM. One study with an equivocal result (pulmonary nodule) was excluded. Information on the results of the anatomopathological studies and imaging examinations was obtained from the clinical history and the medical reports of the CT and PET/CT studies. A database was created with the collected data using Excel, and a descriptive statistical analysis was carried out. Results – Of the PET/CT studies analysed, 31 were considered true positives (TP), 28 true negatives (TN), one a false positive (FP) and one a false negative (FN). The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and accuracy of PET/CT for staging and assessment of suspected recurrence of CMM are, respectively, 96.9%, 96.6%, 96.9%, 96.6% and 96.7%. Of the CT results considered in the statistical analysis, 14 corresponded to TP, 12 to TN, three to FP and five to FN. The sensitivity, specificity, PPV, NPV and accuracy of CT for staging and assessment of suspected recurrence of CMM are, respectively, 73.7%, 80.0%, 82.4%, 70.6% and 76.5%. Compared with the CT results, PET/CT led to a change in therapeutic approach in 23% of the studies.
Conclusion – PET/CT is a useful examination in the evaluation of CMM, showing higher diagnostic accuracy in staging and in the assessment of suspected recurrence of CMM compared with CT alone.
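The reported PET/CT figures follow directly from the stated confusion counts (31 true positives, 28 true negatives, one false positive, one false negative). A small helper, with hypothetical naming, reproduces them:

```python
def diagnostic_metrics(tp, tn, fp, fn):
    """Standard diagnostic accuracy measures from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

m = diagnostic_metrics(tp=31, tn=28, fp=1, fn=1)
# sensitivity 96.9%, specificity 96.6%, accuracy 96.7% (rounded to one decimal)
```

These match the PET/CT values reported above; plugging in the CT counts (14, 12, 3, 5) reproduces the CT figures likewise.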
Abstract:
This article describes a finite element-based formulation for the statistical analysis of the response of stochastic structural composite systems whose material properties are described by random fields. A first-order technique is used to obtain the second-order statistics of the structural response, considering means and variances of the displacement and stress fields of plate or shell composite structures. The propagation of uncertainties depends on sensitivities, taken as measures of variation effects. The adjoint variable method is used to obtain the sensitivity matrix; this method is appropriate for composite structures due to the large number of random input parameters. Dominant effects on the stochastic characteristics are studied by analyzing the influence of different random parameters. In particular, a study of the influence of anisotropy on uncertainty propagation in angle-ply composites is carried out based on the proposed approach.
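For independent inputs, the first-order technique mentioned above reduces to the classical first-order second-moment rule Var[u] ≈ Σᵢ (∂u/∂xᵢ)² Var[xᵢ], where the derivatives are the sensitivities. A minimal sketch of that reduced rule (the article's full formulation uses the sensitivity matrix from the adjoint variable method and general covariances):

```python
def response_variance(sensitivities, variances):
    """First-order second-moment propagation for independent random inputs:
    Var[u] ~= sum_i (du/dx_i)**2 * Var[x_i].
    `sensitivities` holds the derivatives du/dx_i at the mean point."""
    return sum(s * s * v for s, v in zip(sensitivities, variances))

# Two inputs with sensitivities 2.0 and 3.0 and variances 0.25 and 1.0:
var_u = response_variance([2.0, 3.0], [0.25, 1.0])
# var_u == 4.0 * 0.25 + 9.0 * 1.0 == 10.0
```

This is why the sensitivity matrix is the key quantity: with many random inputs, the adjoint method delivers all the derivatives far more cheaply than one finite-difference solve per input.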
Abstract:
OBJECTIVE To evaluate the level of HIV/AIDS knowledge among men who have sex with men in Brazil using the latent trait model estimated by Item Response Theory. METHODS Multicenter, cross-sectional study, carried out in ten Brazilian cities between 2008 and 2009. Adult men who have sex with men were recruited (n = 3,746) through Respondent Driven Sampling. HIV/AIDS knowledge was ascertained through ten statements by face-to-face interview and latent scores were obtained through two-parameter logistic modeling (difficulty and discrimination) using Item Response Theory. Differential item functioning was used to examine each item characteristic curve by age and schooling. RESULTS Overall, the HIV/AIDS knowledge scores using Item Response Theory did not exceed 6.0 (scale 0-10), with mean and median values of 5.0 (SD = 0.9) and 5.3, respectively, with 40.7% of the sample with knowledge levels below the average. Some beliefs still exist in this population regarding the transmission of the virus by insect bites, by using public restrooms, and by sharing utensils during meals. With regard to the difficulty and discrimination parameters, eight items were located below the mean of the scale and were considered very easy, and four items presented very low discrimination parameter (< 0.34). The absence of difficult items contributed to the inaccuracy of the measurement of knowledge among those with median level and above. CONCLUSIONS Item Response Theory analysis, which focuses on the individual properties of each item, allows measures to be obtained that do not vary or depend on the questionnaire, which provides better ascertainment and accuracy of knowledge scores. Valid and reliable scales are essential for monitoring HIV/AIDS knowledge among the men who have sex with men population over time and in different geographic regions, and this psychometric model brings this advantage.
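The two-parameter logistic (2PL) model used above to obtain the latent scores gives the probability of a correct response as P(θ) = 1 / (1 + exp(−a(θ − b))), with discrimination a and difficulty b. A minimal sketch (function name is hypothetical):

```python
import math

def p_correct(theta, a, b):
    """Two-parameter logistic IRT model: probability that a respondent with
    ability theta answers an item with discrimination a and difficulty b
    correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

At θ = b the probability is exactly 0.5, which is why items with difficulty below the sample mean are "easy" for most respondents; a very low discrimination such as the reported a < 0.34 yields a nearly flat curve that separates ability levels poorly.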
Abstract:
Background: In Angola, malaria is an endemic disease with a major impact on the economy. The WHO recommends testing all suspected malaria cases, to avoid presumptive treatment of the disease. In malaria-endemic regions, laboratory technicians must be very comfortable with microscopy, the gold standard for malaria diagnosis, to avoid incorrect diagnoses. The improper use of medication promotes drug resistance and undesirable side effects. The present study aims to assess the impact of a three-day refresher course on the knowledge of technicians, the quality of blood smear preparation and the accuracy of microscopy malaria diagnosis, using qPCR as the reference method. Methods: This study was implemented in laboratories of three hospitals in different provinces of Angola: Bengo, Benguela and Luanda. In each laboratory, samples were collected before and after the training course (a slide with thin and thick blood smears, a dried blood spot and a form). The impact of the intervention was evaluated through a written test, the quality of slide preparation and the performance of microscopy. Results: A significant increase in the median written-test score was found, from 52.5% to 65.0%. A total of 973 slides were analysed to evaluate the quality of thick and thin blood smears. Across all laboratories, there was a significant increase in the quality of thick and thin blood smears. To determine the performance of microscopy using qPCR as the reference method, we used 1,028 samples. Benguela presented the highest values for specificity, 92.9% and 98.8% pre- and post-course, respectively; for sensitivity, the best pre-course value was obtained in Benguela (75.9%) and the best post-course value in Luanda (75.0%). However, no significant increase in sensitivity or specificity after the training course was registered in any of the laboratories analysed. Discussion: The findings of this study support the need for continuous refresher training for microscopists and other laboratory staff.
The laboratories should have a quality control programme to supervise diagnosis and to assess the periodicity of new training. However, other variables need to be considered for a correct malaria diagnosis, such as adequate equipment and reagents for staining and visualization, good working conditions, and motivated and qualified personnel.