160 results for Embedded predication
Abstract:
Adopting standard-based weblab infrastructures can add value by spreading their influence and acceptance in education. This paper suggests a solution based on the IEEE 1451.0 standard and FPGA technology for creating reconfigurable weblab infrastructures using Instruments and Modules (I&Ms) described through standard Hardware Description Language (HDL) files. It describes a methodology for creating and binding I&Ms into an IEEE1451-module embedded in an FPGA-based board that can be remotely controlled and accessed using IEEE1451-HTTP commands. Finally, an example of a step-motor controller module bound to that IEEE1451-module is described.
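As a rough illustration of the remote-access side of such an infrastructure, the sketch below sends an IEEE1451-style HTTP command to a module channel. The host address, endpoint path and command payload are hypothetical placeholders modeled on the TransducerAccess-style commands the abstract mentions, not details taken from the paper.

```python
# Minimal sketch of driving a weblab module through IEEE1451-style HTTP
# commands. Endpoint names and payloads are illustrative assumptions.
from urllib.parse import urlencode
from urllib.request import urlopen

WEBLAB_HOST = "http://weblab.example.edu"  # hypothetical FPGA board address

def write_channel(tim_id: int, channel_id: int, value: str) -> str:
    """Send a WriteData-style command to one I&M channel (e.g. the step-motor controller)."""
    query = urlencode({
        "timId": tim_id,          # identifies the IEEE1451-module (TIM)
        "channelId": channel_id,  # identifies the bound I&M inside the FPGA
        "value": value,
    })
    with urlopen(f"{WEBLAB_HOST}/1451/TransducerAccess/WriteData?{query}") as resp:
        return resp.read().decode()

# Example: command the step-motor controller (channel 1) to rotate 90 degrees.
if __name__ == "__main__":
    print(write_channel(tim_id=1, channel_id=1, value="ROTATE:90"))
```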
Abstract:
Learning management systems are routinely used for presenting, solving and grading exercises with large classes. However, teachers are constrained to use questions with pre-defined answers, such as multiple choice, to automatically correct the exercises of their students. Complex exercises cannot be evaluated automatically by the LMS and require the coordination of a set of heterogeneous systems. For instance, programming exercises require a specialized exercise resolution environment and automatic evaluation features, each provided by a different type of system. In this paper, the authors discuss an approach for coordinating a network of eLearning systems that supports the resolution of exercises. The proposed approach is based on a pivot component embedded in the LMS with two main roles: 1) providing an exercise resolution environment, and 2) coordinating communication between the LMS and other systems, exposing their functions as web services. The integration of the pivot component in the LMS relies on Learning Tools Interoperability (LTI). This paper presents an architecture for coordinating such a network of eLearning systems and validates the proposed approach by creating a network integrated with LMSs from two different vendors.
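For context on the LTI integration mechanism, the sketch below shows the OAuth 1.0 HMAC-SHA1 signing that LTI 1.x basic launches use when an LMS hands a user over to an embedded tool such as the pivot component. The URL, key and secret are placeholder values, not from the paper.

```python
# Minimal sketch of an LTI 1.x basic-launch signature (OAuth 1.0, HMAC-SHA1).
# Key/secret/URL values are placeholders for illustration only.
import base64, hashlib, hmac, time, uuid
from urllib.parse import quote

def percent(s: str) -> str:
    return quote(s, safe="~")

def sign_lti_launch(url: str, params: dict, consumer_key: str, secret: str) -> dict:
    oauth = {
        "oauth_consumer_key": consumer_key,
        "oauth_nonce": uuid.uuid4().hex,
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_version": "1.0",
    }
    all_params = {**params, **oauth}
    normalized = "&".join(
        f"{percent(k)}={percent(v)}" for k, v in sorted(all_params.items())
    )
    base_string = "&".join(["POST", percent(url), percent(normalized)])
    key = percent(secret).encode() + b"&"  # empty token secret for LTI launches
    digest = hmac.new(key, base_string.encode(), hashlib.sha1).digest()
    oauth["oauth_signature"] = base64.b64encode(digest).decode()
    return {**params, **oauth}  # form fields the LMS POSTs to the tool

# Example launch payload: user and exercise context sent by the LMS.
fields = sign_lti_launch(
    "https://tool.example.edu/lti/launch",
    {"lti_message_type": "basic-lti-launch-request",
     "lti_version": "LTI-1p0",
     "resource_link_id": "exercise-42"},
    consumer_key="lms-key", secret="lms-secret",
)
```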
Abstract:
This paper presents a low-cost scaled model of a silo for drying and airing cereal grains. It allows several parameters associated with the silo's operation to be controlled and monitored through a remotely accessible infrastructure. The scaled model consists of a 2.50 m wide × 2.10 m long plant whose control and monitoring capabilities are provided by micro-Web servers. An application running on the micro-Web servers stores all parameters in a database for later analysis. The implemented model aims to support a remote experimentation facility for technological education, research-oriented tutorials, and industrial applications. Given the low-cost requirement, this remote facility can be easily replicated in other institutions to support a network of remote labs that accommodates concurrent access by several users (e.g. students).
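A minimal sketch of the data-logging side of such a setup is shown below: a client periodically polls a micro-Web server for operating parameters and stores them in a database for later analysis. The host name, endpoint and field names are hypothetical, not taken from the paper.

```python
# Sketch: poll a micro-Web server for silo parameters and log them to SQLite.
# Server address and JSON field names are illustrative assumptions.
import json, sqlite3, time
from urllib.request import urlopen

SERVER = "http://silo-microweb.local/status"  # hypothetical micro-Web server

def log_readings(db_path: str = "silo.db", period_s: float = 60.0) -> None:
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS readings (
                        ts REAL, temperature REAL, humidity REAL, fan_on INTEGER)""")
    while True:
        with urlopen(SERVER) as resp:
            data = json.load(resp)  # e.g. {"temperature": 18.2, "humidity": 61, "fan_on": 1}
        conn.execute("INSERT INTO readings VALUES (?, ?, ?, ?)",
                     (time.time(), data["temperature"], data["humidity"], data["fan_on"]))
        conn.commit()
        time.sleep(period_s)
```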
Abstract:
This study builds on previous experimental work in which embedded cylindrical heaters were applied to a pultrusion machine die and the resulting energy performance was compared with that of the former heating system based on planar resistances. The previous work showed that the use of embedded resistances significantly enhances the energy performance of the pultrusion process, leading to a 57% decrease in energy consumption. However, that study was based on an existing pultrusion die, which allowed only a single relative position for the heaters. In the present work, new relative positions for the heaters were investigated in order to optimise the heat distribution and the energy consumption. Finite Element Analysis was applied as an efficient tool to identify the best relative position of the heaters in the die, taking into account the usual process parameters and the control system already tested in the previous study. The analysis was first developed for eight cylindrical heaters arranged in four different layouts. In a second phase, in order to refine the results, a new approach was adopted using sixteen heaters with the same total power. The final results show that correct positioning of the heaters can reduce energy consumption by about 10%, decreasing production costs and improving the eco-efficiency of the pultrusion process.
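As a toy illustration of the kind of heater-placement comparison the paper carries out with full FEA, the sketch below solves a 1D steady-state heat equation by finite differences for two candidate heater layouts and compares temperature uniformity. All numbers are illustrative assumptions, not the paper's die or process parameters.

```python
# Toy 1D finite-difference comparison of embedded-heater layouts.
# Geometry, conductivity and heat-source values are made-up examples.
import numpy as np

def temperature_profile(heater_cells, n=100, length=1.0, k=40.0, q=5e4, t_amb=20.0):
    """Solve -k T'' = q(x) on a rod with ambient-temperature ends."""
    h = length / (n - 1)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
        b[i] = -(q if i in heater_cells else 0.0) * h * h / k
    A[0, 0] = A[-1, -1] = 1.0   # Dirichlet boundary conditions
    b[0] = b[-1] = t_amb
    return np.linalg.solve(A, b)

# Compare two candidate layouts of four heaters by the temperature spread
# over the central (working) region of the die.
layouts = {"clustered": {40, 45, 50, 55}, "spread": {20, 40, 60, 80}}
for name, cells in layouts.items():
    T = temperature_profile(cells)
    core = T[25:75]
    print(f"{name}: spread = {core.max() - core.min():.1f} K")
```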
Abstract:
The LMS plays an indisputable role in the majority of eLearning environments. This type of eLearning system is often used for presenting, solving and grading simple exercises. However, exercises from complex domains, such as computer programming, require heterogeneous systems such as evaluation engines, learning object repositories and exercise resolution environments. Coordinating networks of such disparate systems is rather complex. This work presents a standards-based approach for coordinating a network of eLearning systems that supports the resolution of exercises. The proposed approach uses a pivot component embedded in the LMS with two roles: providing an exercise resolution environment and coordinating communication between the LMS and other systems that expose their functions as web services. The integration of the pivot component with the LMS relies on Learning Tools Interoperability (LTI). The approach is validated through the integration of the component with LMSs from two vendors.
Abstract:
This study addresses the optimization of the pultrusion manufacturing process from the energy-consumption point of view. The die heating system of external platen heaters commonly used in pultrusion machines is one of the components that contributes most to the high energy consumption of the process. Hence, instead of the conventional multi-planar heaters, a new internal die heating system that leads to lower heat losses is proposed. The effect of the number and relative position of the embedded heaters along the die is also analysed in order to establish the optimal arrangement that minimizes both the energy rate and the energy consumption. The simulation and optimization processes were strongly supported by Finite Element Analysis (FEA) and calibrated against the temperature profile obtained through thermographic imaging. The main outputs of this study show that the use of embedded cylindrical resistances instead of external planar heaters drastically reduces both the power consumption and the warm-up period of the die heating system. For the analysed die tool and process, savings in energy consumption of up to 60% and warm-up periods of less than half an hour were attained with the new internal heating system. These improvements reduce the power requirements of the pultrusion process, thereby minimizing industrial costs and contributing to a more sustainable pultrusion manufacturing industry.
Abstract:
TICEduca. 3rd International Congress on ICT and Education. 14-16 November, Lisbon
Abstract:
This work aims to design a synthetic construct that mimics the natural bone extracellular matrix through innovative approaches based on simultaneous type I collagen electrospinning and nanophased hydroxyapatite (nanoHA) electrospraying, using non-denaturing conditions and non-toxic reagents. The morphological results, assessed using scanning electron microscopy and atomic force microscopy (AFM), showed a mesh of collagen nanofibers embedded with HA crystals, with fiber diameters in the nanometer range (30 nm), significantly lower than the values above 200 nm reported in the literature. The mechanical properties, assessed by nanoindentation using AFM, exhibited elastic moduli between 0.3 and 2 GPa. Fourier transform infrared spectroscopy confirmed the integrity of the collagen as well as the presence of nanoHA in the composite. The network architecture allows cell access to both collagen nanofibers and HA crystals, as in the natural bone environment. The inclusion of nanoHA agglomerates by electrospraying in type I collagen nanofibers improved the adhesion and metabolic activity of MC3T3-E1 osteoblasts. This new nanostructured collagen–nanoHA composite holds great potential for healing bone defects, as a functional membrane for guided bone tissue regeneration, and for treating bone diseases.
Abstract:
The container loading problem (CLP) is a combinatorial optimization problem for the spatial arrangement of cargo inside containers so as to maximize the usage of space. The algorithms for this problem are of limited practical applicability if real-world constraints are not considered, one of the most important of which is deemed to be stability. This paper addresses static stability, as opposed to dynamic stability, looking at the stability of the cargo during container loading. This paper proposes two algorithms. The first is a static stability algorithm based on static mechanical equilibrium conditions that can be used as a stability evaluation function embedded in CLP algorithms (e.g. constructive heuristics, metaheuristics). The second proposed algorithm is a physical packing sequence algorithm that, given a container loading arrangement, generates the actual sequence by which each box is placed inside the container, considering static stability and loading operation efficiency constraints.
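To make the idea of a stability evaluation function concrete, the sketch below checks the common "fully supported base" criterion for a set of placed boxes. It deliberately uses this simple geometric criterion rather than the paper's static mechanical equilibrium conditions, which require force balances; geometry and the example boxes are illustrative.

```python
# Sketch of a full-base-support stability check, usable as an evaluation
# function inside CLP heuristics. Simpler than the paper's equilibrium test.
from dataclasses import dataclass

@dataclass
class Box:
    x: float; y: float; z: float      # position of the min corner (z = height)
    w: float; d: float; h: float      # width, depth, height

def overlap_area(a: Box, b: Box) -> float:
    """Horizontal overlap between the base of a and the top face of b."""
    dx = min(a.x + a.w, b.x + b.w) - max(a.x, b.x)
    dy = min(a.y + a.d, b.y + b.d) - max(a.y, b.y)
    return max(dx, 0.0) * max(dy, 0.0)

def is_statically_stable(boxes: list[Box], tol: float = 1e-9) -> bool:
    for a in boxes:
        if a.z <= tol:                # resting on the container floor
            continue
        support = sum(overlap_area(a, b) for b in boxes
                      if abs(b.z + b.h - a.z) <= tol)   # boxes directly underneath
        if support + tol < a.w * a.d:                   # base not fully covered
            return False
    return True

# Example: a box overhanging half its base is rejected.
print(is_statically_stable([Box(0, 0, 0, 1, 1, 1), Box(0.5, 0, 1, 1, 1, 1)]))  # False
```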
Abstract:
Consider the problem of assigning implicit-deadline sporadic tasks on a heterogeneous multiprocessor platform comprising a constant number (denoted by t) of distinct types of processors; such a platform is referred to as a t-type platform. We present two algorithms, LPGIM and LPGNM, each providing the following guarantee. For a given t-type platform and a task set, if there exists a task assignment such that tasks can be scheduled to meet their deadlines by allowing them to migrate only between processors of the same type (intra-migrative), then: (i) LPGIM succeeds in finding such an assignment, with the same restriction on task migration (intra-migrative), on a platform in which only one processor of each type is 1 + α·(t−1)/t times faster, and (ii) LPGNM succeeds in finding a task assignment in which tasks are not allowed to migrate between processors (non-migrative), on a platform in which every processor is 1 + α times faster. The parameter α is a property of the task set: it is the maximum of all task utilizations that are no greater than one. To the best of our knowledge, for t-type heterogeneous multiprocessors: (i) for the problem of intra-migrative task assignment, no previous algorithm exists with a proven bound, and hence our algorithm, LPGIM, is the first of its kind, and (ii) for the problem of non-migrative task assignment, our algorithm, LPGNM, has superior performance compared to the state of the art.
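A small worked example of the parameter α and the two speed-up factors stated in the abstract (the utilizations below are made-up values):

```python
# α is the largest task utilization that does not exceed one; LPGIM needs
# one processor per type to be 1 + α(t-1)/t times faster, LPGNM needs every
# processor to be 1 + α times faster. Example utilizations are invented.
def alpha(utilizations):
    return max(u for u in utilizations if u <= 1.0)

utils = [0.9, 0.4, 0.75, 1.3, 0.2]   # example task utilizations
t = 3                                 # number of distinct processor types
a = alpha(utils)                      # 0.9: the largest utilization <= 1
print("LPGIM speed-up factor:", 1 + a * (t - 1) / t)   # 1.6
print("LPGNM speed-up factor:", 1 + a)                 # 1.9
```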
Abstract:
Energy consumption is one of the major issues for modern embedded systems. Early power-saving approaches focused mainly on dynamic power dissipation, while neglecting the static (leakage) energy consumption. However, technology improvements have led to a situation in which static power dissipation increasingly dominates. To address this issue, hardware vendors have equipped modern processors with several sleep states. We propose a set of leakage-aware energy management approaches that reduce the energy consumption of embedded real-time systems while respecting their real-time constraints. Our algorithms are based on the race-to-halt strategy, which runs the system at top speed in order to create long idle intervals that are then used to deploy a sleep state. The effectiveness of our algorithms is illustrated with an extensive set of simulations that show up to an 8% reduction in energy consumption over existing work at high utilization. The complexity of our algorithms is lower than that of state-of-the-art algorithms. We also eliminate assumptions made in related work that restrict the practical application of the respective algorithms. Moreover, a novel study of the relation between the use of sleep intervals and the number of pre-emptions is presented, based on a large set of simulation results, in which our algorithms reduce the observed number of pre-emptions in all cases. Our results show that sleep states in general can save up to 30% of the overall number of pre-emptions when compared to the sleep-agnostic earliest-deadline-first algorithm.
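The sleep-state decision underlying race-to-halt policies can be sketched as follows: once an idle interval of known minimum length opens up, pick the deepest sleep state whose break-even time fits in the interval. The state parameters below are illustrative assumptions, not measured values from the paper.

```python
# Sketch of break-even-based sleep-state selection for race-to-halt.
# Power and break-even figures are invented for illustration.
from dataclasses import dataclass

@dataclass
class SleepState:
    name: str
    power_w: float        # power drawn while in this state
    break_even_s: float   # minimum idle length for the state to pay off

# Deeper states draw less power but need longer idle intervals to amortize
# their transition overhead.
STATES = [
    SleepState("idle",        1.00, 0.000),
    SleepState("light-sleep", 0.30, 0.010),
    SleepState("deep-sleep",  0.05, 0.150),
]

def pick_sleep_state(idle_interval_s: float) -> SleepState:
    feasible = [s for s in STATES if s.break_even_s <= idle_interval_s]
    return min(feasible, key=lambda s: s.power_w)  # deepest affordable state

print(pick_sleep_state(0.020).name)   # light-sleep: deep-sleep would not pay off
```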
Abstract:
Consider the scheduling of real-time tasks on a multiprocessor where migration is forbidden. Specifically, consider the problem of determining a task-to-processor assignment for a given collection of implicit-deadline sporadic tasks upon a multiprocessor platform with two distinct types of processors. For this problem, we propose a new algorithm, LPC (task assignment based on solving a Linear Program with Cutting planes). The algorithm offers the following guarantee: for a given task set and platform, if a feasible task-to-processor assignment exists, then LPC also succeeds in finding one, but on a platform in which each processor is 1.5 times faster and which has three additional processors. For systems with a large number of processors, LPC has a better approximation ratio than state-of-the-art algorithms. To the best of our knowledge, this is the first work that develops a provably good real-time task assignment algorithm using cutting planes.
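The linear-programming relaxation at the heart of LP-based assignment algorithms can be sketched as below: fractional variables say what fraction of each task goes to each processor type, subject to capacity constraints. The rounding of the fractional solution (and, in the paper, the cutting planes) is the hard part and is omitted; utilizations and platform sizes are made-up example values.

```python
# Sketch of the LP relaxation of two-type task assignment. Not the paper's
# LPC algorithm: rounding and cutting planes are omitted; data is invented.
import numpy as np
from scipy.optimize import linprog

u = np.array([[0.6, 0.9],    # u[i][j]: utilization of task i on type j
              [0.8, 0.3],
              [0.5, 0.5]])
m = np.array([1.0, 1.0])     # processors available of each type
n_tasks, n_types = u.shape
n_vars = n_tasks * n_types   # x is flattened row-major: x[i*n_types + j]

# Each task must be fully (fractionally) assigned: sum_j x[i][j] = 1.
A_eq = np.zeros((n_tasks, n_vars))
for i in range(n_tasks):
    A_eq[i, i * n_types:(i + 1) * n_types] = 1.0
b_eq = np.ones(n_tasks)

# Each processor type has bounded capacity: sum_i u[i][j] * x[i][j] <= m[j].
A_ub = np.zeros((n_types, n_vars))
for j in range(n_types):
    A_ub[j, j::n_types] = u[:, j]
b_ub = m

res = linprog(c=np.zeros(n_vars), A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=(0.0, 1.0))
print(res.status == 0)                              # a feasible relaxation exists
print(res.x.reshape(n_tasks, n_types))              # fractional assignment
```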
Abstract:
“Many-core” systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfill specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that have an impact on the execution time of the tasks deployed on these systems, as well as upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet when it is transmitted over the NoC infrastructure. Towards this aim, we first identify and explore some limitations in the existing recursive-calculus-based approaches to computing the Worst-Case Traversal Time (WCTT) of a packet. Then, we extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called “Branch and Prune” (BP). Our proposed method provides tighter, yet still safe, estimates than the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, namely “Branch, Prune and Collapse” (BPC), which offers a configurable parameter providing a flexible trade-off between the computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP correspond to the special cases of BPC in which the trade-off parameter is 1 or ∞, respectively. Through simulations, we analyze this trade-off, reason about the implications of certain choices, and provide case studies to observe the impact of task parameters on the WCTT estimates.
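As a toy illustration of the recursive-calculus idea the paper starts from, the sketch below bounds a flow's traversal time by its own no-contention latency plus the (recursively bounded) traversal times of the flows that can directly block it on shared links. This is the simple, pessimistic scheme that BP/BPC tighten; the topology and latencies are made-up values.

```python
# Toy recursive-calculus WCTT bound. Flows, latencies and interference
# relations are invented; real NoC analysis is far more detailed.
from functools import lru_cache

BASE_LATENCY = {"f1": 10, "f2": 8, "f3": 5}   # no-contention traversal times
DIRECT_INTERFERENCE = {                        # flows sharing links upstream
    "f1": ["f2", "f3"],
    "f2": ["f3"],
    "f3": [],
}

@lru_cache(maxsize=None)
def wctt_bound(flow: str) -> int:
    """Recursive upper bound on the worst-case traversal time of one packet."""
    return BASE_LATENCY[flow] + sum(wctt_bound(g) for g in DIRECT_INTERFERENCE[flow])

print(wctt_bound("f1"))   # 10 + (8 + 5) + 5 = 28
```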
Abstract:
BACKGROUND: Bladder cancer is a significant health problem in rural areas of Africa and the Middle East where Schistosoma haematobium is prevalent, supporting an association between malignant transformation and infection by this blood fluke. Nevertheless, the molecular mechanisms linking these events are poorly understood. Bladder cancers in infected populations are generally diagnosed at a late stage, since there is a lack of non-invasive diagnostic tools, reinforcing the need for early carcinogenesis markers. METHODOLOGY/PRINCIPAL FINDINGS: Forty-three formalin-fixed, paraffin-embedded bladder biopsies of S. haematobium-infected patients, consisting of bladder tumours, tumour-adjacent mucosa and pre-malignant/malignant urothelial lesions, were screened for bladder cancer biomarkers. These included the oncoprotein p53, the tumour proliferation rate (Ki-67 > 17%), and the cell-surface cancer-associated glycans sialyl-Tn (sTn) and sialyl-Lewisa/x (sLea/sLex), which are involved in immune escape and metastasis. Bladder tumours of non-S. haematobium etiology and normal urothelium were used as controls. S. haematobium-associated benign/pre-malignant lesions presented alterations in p53 and sLex that were also found in bladder tumours. Similar results were observed in non-S. haematobium-associated tumours, irrespective of their histological nature, denoting some common molecular pathways. In addition, most benign/pre-malignant lesions also expressed sLea. However, proliferative phenotypes were more prevalent in lesions adjacent to bladder tumours, while sLea was characteristic of benign/pre-malignant lesions alone, suggesting that it may be a biomarker of early carcinogenesis associated with the parasite. A correlation was observed between the frequency of the biomarkers in the tumour and in the adjacent mucosa, with the exception of Ki-67. Most S. haematobium eggs embedded in the urothelium were also positive for sLea and sLex. Reinforcing the pathologic nature of the studied biomarkers, none was observed in healthy urothelium. CONCLUSION/SIGNIFICANCE: This preliminary study suggests that p53 and sialylated glycans are surrogate biomarkers of bladder cancerization associated with S. haematobium, highlighting a missing link between infection and cancer development. Eggs of S. haematobium express sLea and sLex antigens in mimicry of human leukocyte glycosylation, which may play a role in colonization and disease dissemination. These observations may help the early identification of infected patients at a higher risk of developing bladder cancer and guide the future development of non-invasive diagnostic tests.
Abstract:
The application of mathematical methods and computer algorithms to the analysis of economic and financial data series aims to give empirical descriptions of the hidden relations between many complex or unknown variables and systems. This strategy avoids the requirement of building models based on a set of ‘fundamental laws’, which is the usual paradigm for studying phenomena in physics and engineering. In spite of this shortcut, financial series prove hard to tackle, involving complex memory effects and an apparently chaotic behaviour. Several measures for describing these objects have been adopted by market agents but, due to their simplicity, they cannot cope with the diversity and complexity embedded in the data. It is therefore important to propose new measures that, on the one hand, are easily interpretable by practitioners and, on the other hand, are capable of capturing a significant part of the dynamical effects.
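One classical, interpretable measure of the memory effects mentioned above is the Hurst exponent, sketched below via rescaled-range (R/S) analysis: H near 0.5 indicates no long-range memory, H above 0.5 indicates persistence. This is a standard illustration of such a measure, not a measure proposed by the paper.

```python
# Sketch: estimate the Hurst exponent of a return series by R/S analysis.
import numpy as np

def hurst_rs(series: np.ndarray, window_sizes=(8, 16, 32, 64, 128)) -> float:
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_values = []
        for start in range(0, len(series) - n + 1, n):
            w = series[start:start + n]
            dev = np.cumsum(w - w.mean())     # cumulative deviation from the mean
            r = dev.max() - dev.min()         # range
            s = w.std()                       # scale
            if s > 0:
                rs_values.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_values)))
    slope, _ = np.polyfit(log_n, log_rs, 1)   # H is the log-log slope
    return slope

rng = np.random.default_rng(0)
returns = rng.normal(size=4096)               # memoryless returns: H ~ 0.5
print(f"H = {hurst_rs(returns):.2f}")
```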