212 results for distributed application
Abstract:
Enrofloxacin (ENR) is an antimicrobial used both in humans and in food-producing species. Its control is required in farmed species and their surroundings in order to reduce the prevalence of antibiotic-resistant bacteria. Thus, a new biomimetic sensor for enrofloxacin is presented. An artificial host was imprinted in specific polymers. These were dispersed in 2-nitrophenyloctyl ether and entrapped in a poly(vinyl chloride) matrix. The potentiometric sensors exhibited a near-Nernstian response, with slopes of 48–63 mV/Δlog([ENR]/M). The detection limits ranged from 0.28 to 1.01 µg mL−1. Sensors were independent of the pH of test solutions within 4–7. Good selectivity was observed toward potassium, calcium, barium, magnesium, glycine, ascorbic acid, creatinine, norfloxacin, ciprofloxacin, and tetracycline. In flowing media, the biomimetic sensors presented good reproducibility (RSD of ±0.7%), fast response, good sensitivity (47 mV/Δlog([ENR]/M)), a wide linear range (1.0 × 10−5–1.0 × 10−3 M), a low detection limit (0.9 µg mL−1), and a stable baseline for a 5 × 10−2 M acetate buffer (pH 4.7) carrier. The sensors were used to analyze fish samples. The method offered the advantages of simplicity, accuracy, and automation feasibility. The sensing membrane may contribute to the development of small devices allowing in vivo measurements of enrofloxacin or parent drugs.
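The near-Nernstian behaviour reported above follows the usual Nernst-type calibration law for potentiometric sensors; a minimal sketch with generic symbols (E for the measured potential, E⁰ for the standard term, S for the slope, none taken from the paper). For a monovalent ion at 25 °C the ideal slope is about 59.2 mV per decade, which is why slopes of 48–63 mV/decade count as near-Nernstian:

\[
E = E^{0} + S \log\!\left(\frac{[\mathrm{ENR}]}{\mathrm{M}}\right),
\qquad
S_{\text{ideal}} = \frac{RT \ln 10}{F} \approx 59.2\ \mathrm{mV/decade}\ (25\,^{\circ}\mathrm{C})
\]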
Abstract:
11th IEEE World Conference on Factory Communication Systems (WFCS 2015), 27–29 May 2015, TII-SS-2: Scheduling and Performance Analysis. Palma de Mallorca, Spain.
Abstract:
In this paper, the fractional Fourier transform (FrFT) is applied to the spectral bands of a two-component mixture containing oxfendazole and oxyclozanide to provide multicomponent quantitative prediction of the related substances. With this aim in mind, the moduli of the FrFT spectral bands are processed by the continuous Mexican Hat family of wavelets, the combination being denoted MEXH-CWT-MOFrFT. Four modulus sets are obtained for values of the FrFT parameter a ranging from 0.6 up to 0.9, in order to compare their effects upon the spectral and quantitative resolutions. Four linear regression plots for each substance were obtained by measuring the MEXH-CWT-MOFrFT amplitudes in the application of the MEXH family to the modulus of the FrFT. This powerful new combined tool is validated by analyzing artificial samples of the related drugs, and it is applied to the quality control of commercial veterinary samples.
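As a rough illustration of the processing chain (FrFT modulus followed by a Mexican Hat continuous wavelet transform), here is a minimal Python sketch; the frft helper is a hypothetical stand-in for any published discrete fractional Fourier transform implementation, since none is assumed from the paper:

```python
import numpy as np
import pywt

def frft(signal, a):
    """Hypothetical discrete fractional Fourier transform of order `a`.
    Placeholder: substitute any published DFrFT implementation here."""
    raise NotImplementedError

def mexh_cwt_mofrft(spectral_band, a, scales):
    # Modulus of the FrFT of the spectral band (the "MOFrFT" step).
    modulus = np.abs(frft(spectral_band, a))
    # Continuous wavelet transform with the Mexican Hat ('mexh') wavelet.
    coeffs, _ = pywt.cwt(modulus, scales, 'mexh')
    return coeffs  # amplitudes read off these coefficients feed the calibration plots

# e.g. comparing the four FrFT orders used in the paper:
# for a in (0.6, 0.7, 0.8, 0.9):
#     coeffs = mexh_cwt_mofrft(band, a, np.arange(1, 64))
```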
Abstract:
Distributed real-time systems such as automotive applications are becoming larger and more complex, thus requiring the use of more powerful hardware and software architectures. Furthermore, such distributed applications commonly have stringent real-time constraints, which implies that they would gain in flexibility if they were parallelized and distributed over the system. In this paper, we consider the problem of allocating fixed-priority fork-join Parallel/Distributed real-time tasks onto distributed multi-core nodes connected through a Flexible Time-Triggered Switched Ethernet network. We analyze the system requirements and present a set of formulations based on a constraint programming approach. Constraint programming allows us to express the relations between variables in the form of constraints. Our approach is guaranteed to find a feasible solution, if one exists, in contrast to other approaches based on heuristics. Furthermore, approaches based on constraint programming have been shown to obtain solutions for this type of formulation in reasonable time.
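A minimal sketch of how such an allocation can be phrased as a constraint program, using Google OR-Tools CP-SAT as a generic solver rather than the paper's formulation; the task counts, utilizations, and node capacities are made-up:

```python
from ortools.sat.python import cp_model

# Toy instance: 6 tasks with integer utilizations, 3 multi-core nodes.
utils = [30, 20, 50, 10, 40, 25]   # per-task utilization (percent, made-up)
capacity = 100                     # per-node capacity (percent, made-up)
n_nodes = 3

model = cp_model.CpModel()
# x[t][n] == 1 iff task t is allocated to node n.
x = [[model.NewBoolVar(f'x_{t}_{n}') for n in range(n_nodes)]
     for t in range(len(utils))]

for t in range(len(utils)):
    model.AddExactlyOne(x[t])      # each task runs on exactly one node
for n in range(n_nodes):           # no node may be overloaded
    model.Add(sum(utils[t] * x[t][n] for t in range(len(utils))) <= capacity)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for t in range(len(utils)):
        node = next(n for n in range(n_nodes) if solver.Value(x[t][n]))
        print(f'task {t} -> node {node}')
```

Because CP-SAT performs a complete search, it either returns a feasible assignment or proves that none exists, which is the key advantage over heuristics noted in the abstract.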
Abstract:
Currently, due to the widespread use of computers and the internet, students are trading libraries for the World Wide Web and laboratories for simulation programs. In most courses, simulators are made available to students and can be used to prove theoretical results or to test a developing hardware/product. Although this is an interesting solution (low cost, and an easy and fast way to carry out coursework), it has major disadvantages. As everything is currently done with/in a computer, students are losing the "feel" for the real values of the magnitudes. For instance, in engineering studies, and mainly in the first years, students need to learn electronics, algorithmics, mathematics, and physics. All of these areas can use numerical analysis software, simulation software, or spreadsheets, and in the majority of cases the data used are either simulated or random numbers, although real data could be used instead. For example, if a course uses numerical analysis software and needs a dataset, the students can learn to manipulate arrays. Also, when using spreadsheets to build graphics, instead of using a random table, students could use a real dataset based, for instance, on the room temperature and its variation across the day. In this work we present a framework with a simple interface that can be used by different courses in which computers support the teaching/learning process, giving students a more realistic feeling by using real data. The framework is based on a set of low-cost sensors for different physical magnitudes, e.g. temperature, light, and wind speed, which are either connected to a central server that the students access over an Ethernet protocol or connected directly to the student's computer/laptop. These sensors use the communication ports available, such as serial ports, parallel ports, Ethernet, or Universal Serial Bus (USB). Since a central server is used, the students are encouraged to use the sensor readings in their different courses and, consequently, in different types of software, such as numerical analysis tools, spreadsheets, or simply inside any programming language whenever a dataset is needed. To this end, small pieces of hardware were developed, each containing at least one sensor and using different types of computer communication. As long as the sensors are attached to a server connected to the internet, these tools can also be shared between different schools. This allows sensors that are not available at a given school to be used by getting the values from other places that share them. Another remark is that students in more advanced years, with (theoretically) more know-how, can use the courses that have some affinity with electronics development to build new sensor modules and expand the framework further. The final solution is very interesting: low cost, simple to develop, and flexible with resources, using the same materials in several courses and bringing real-world data into the students' computer work.
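A minimal sketch of how a course exercise might pull one reading from such a central sensor server; the host name, port, and line-oriented text protocol are invented for illustration, not taken from the framework:

```python
import socket

def read_sensor(host: str, port: int, sensor: str) -> float:
    """Request one reading from the (hypothetical) central sensor server.
    Assumes a simple line-oriented text protocol: send the sensor name,
    receive its current value back as an ASCII float."""
    with socket.create_connection((host, port), timeout=5) as conn:
        conn.sendall(f"{sensor}\n".encode())
        return float(conn.recv(64).decode().strip())

# e.g. building a day-long room-temperature dataset for a spreadsheet exercise:
# temperature = read_sensor("sensors.school.example", 5000, "room_temperature")
```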
Abstract:
In this paper, we propose the Distributed using Optimal Priority Assignment (DOPA) heuristic, which finds a feasible partitioning and priority assignment for distributed applications based on the linear transactional model. DOPA partitions the tasks and messages in the distributed system and makes use of the Optimal Priority Assignment (OPA) algorithm, known as Audsley's algorithm, to find the priorities for that partition. The experimental results show how the use of the OPA algorithm increases, on average, the number of schedulable tasks and messages in a distributed system when compared to the use of Deadline Monotonic (DM), usually favoured in other works. Afterwards, we extend these results to the assignment of Parallel/Distributed applications and present a second heuristic named Parallel-DOPA (P-DOPA). In that case, we show how the partitioning process can be simplified by using the Distributed Stretch Transformation (DST), a parallel transaction transformation algorithm introduced in [1].
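For reference, a minimal sketch of Audsley's Optimal Priority Assignment as it is usually stated; the is_schedulable test is an assumed black-box schedulability analysis, not the specific analysis used in the paper:

```python
def audsley_opa(tasks, is_schedulable):
    """Audsley's Optimal Priority Assignment (lowest priority first).
    `is_schedulable(task, higher)` must report whether `task` meets its
    deadline when every task in `higher` has higher priority; it is a
    black-box test supplied by whichever response-time analysis is in use."""
    unassigned = list(tasks)
    order = []  # filled from lowest to highest priority
    while unassigned:
        for task in unassigned:
            higher = [t for t in unassigned if t is not task]
            if is_schedulable(task, higher):
                unassigned.remove(task)   # fix `task` at the current lowest level
                order.append(task)
                break
        else:
            return None  # no task fits the lowest level: no feasible assignment
    return list(reversed(order))  # highest priority first
```

The attraction of OPA, reflected in the results above, is that it is optimal for any schedulability test that depends only on the set of higher-priority tasks, whereas Deadline Monotonic is optimal only under more restrictive assumptions.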
Abstract:
XXXIII Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos (SBRC 2015), III Workshop de Comunicação em Sistemas Embarcados Críticos. Vitória, Brazil.
Abstract:
This contribution introduces the fundamental mathematical aspects of fractional calculus (FC) and discusses some of their consequences. Based on the FC concepts, the chapter reviews the main approaches for implementing fractional operators and discusses the adoption of FC in control systems. Finally, some applications in the areas of modeling and control are presented, namely fractional PID, heat diffusion systems, electromagnetism, fractional electrical impedances, evolutionary algorithms, robotics, and nonlinear system control.
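For context, the fractional PID mentioned above is commonly written as the PIλDμ controller, a standard textbook form rather than one reproduced from the chapter; it reduces to the classical PID when λ = μ = 1:

\[
C(s) = K_{p} + \frac{K_{i}}{s^{\lambda}} + K_{d}\, s^{\mu},
\qquad \lambda, \mu > 0
\]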
Abstract:
Fractional Calculus (FC) goes back to the beginning of the theory of differential calculus. Nevertheless, the application of FC emerged only in the last two decades. In the field of dynamical systems theory some work has been carried out, but the proposed models and algorithms are still at a preliminary stage of establishment. With these ideas in mind, the paper discusses an FC perspective in the study of the dynamics and control of mechanical systems.
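One common entry point to FC is the Grünwald–Letnikov definition, which generalizes the difference quotient of integer-order derivatives to an arbitrary order α (a standard definition, added here for context; it is not quoted from the paper):

\[
D^{\alpha} f(t) = \lim_{h \to 0^{+}} \frac{1}{h^{\alpha}} \sum_{k=0}^{\infty} (-1)^{k} \binom{\alpha}{k} f(t - kh)
\]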
Abstract:
Underground scenarios are among the most challenging environments for accurate and precise 3D mapping, where hostile conditions such as the absence of Global Positioning System coverage, extreme lighting variations, and geometrically smooth surfaces may be expected. So far, the state-of-the-art methods in underground modelling remain restricted to environments in which pronounced geometric features are abundant. This limitation is a consequence of the scan matching algorithms used to solve the localization and registration problems. This paper contributes to the expansion of modelling capabilities to structures characterized by uniform geometry and smooth surfaces, as is the case of road and train tunnels. To achieve that, we combine several state-of-the-art techniques from mobile robotics and propose a method for 6DOF platform positioning in such scenarios, which is later used for environment modelling. A visual monocular Simultaneous Localization and Mapping (MonoSLAM) approach based on the Extended Kalman Filter (EKF), complemented by the introduction of inertial measurements in the prediction step, allows our system to localize itself over long distances using exclusively sensors carried on board a mobile platform. By feeding the Extended Kalman Filter with inertial data we were able to overcome the major problem of MonoSLAM implementations, known as scale factor ambiguity. Despite extreme lighting variations, reliable visual features were extracted through the SIFT algorithm and inserted directly into the EKF mechanism according to the Inverse Depth Parametrization. Wrong frame-to-frame feature matches were rejected through 1-Point RANSAC (Random Sample Consensus). The developed method was tested on a dataset acquired inside a road tunnel, and the navigation results were compared with a ground truth obtained by post-processing a high-grade Inertial Navigation System and L1/L2 RTK-GPS measurements acquired outside the tunnel. Results from the localization strategy are presented and analyzed.
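A minimal sketch of the EKF prediction step fed with inertial measurements, as a generic illustration of the idea; the constant-velocity state, noise values, and input model below are invented for a toy 1-D example and are not the paper's filter:

```python
import numpy as np

def ekf_predict(x, P, F, Q, u, B):
    """Generic EKF prediction: inertial measurements enter as the control
    input u (e.g. measured accelerations), which constrains metric scale
    between the vision-based update steps."""
    x_pred = F @ x + B @ u          # propagate the state with the IMU input
    P_pred = F @ P @ F.T + Q        # propagate the covariance
    return x_pred, P_pred

# Toy 1-D position/velocity example (made-up numbers):
dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
B = np.array([[0.5 * dt**2], [dt]])     # acceleration input model
Q = 1e-4 * np.eye(2)                    # process noise (assumed)
x, P = np.zeros(2), np.eye(2)
x, P = ekf_predict(x, P, F, Q, np.array([0.2]), B)  # 0.2 m/s^2 IMU reading
```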
Abstract:
Background: Over the years, food industry wastes have been the focus of growing interest due to their content in high added-value compounds. A good example is olive oil by-products (OOBP), which retain a great amount of phenolic compounds during olive oil production. Their structure and biological properties justify their potential use as antioxidants in other food products. The efficient recovery of phenolic compounds has been extensively studied and optimized in order to maximize their reintroduction into the food chain and contribute to a higher valorization and better management of wastes from the olive oil industry. Scope and approach: This paper reviews the most representative phenolic compounds described in OOBP and their biological properties. New extraction procedures to efficiently recover these compounds, and the most advanced chromatographic techniques used for a better understanding of the phenolic profile of these complex matrices, are also covered. Finally, this paper reports the main applications of OOBP, with emphasis on their phenolic content as natural antioxidants for food applications. Key findings and conclusions: Besides their antioxidant activity, phenolic compounds from OOBP have also shown antimicrobial and antitumoral properties. Their application as food antioxidants requires new extraction techniques, including the use of non-toxic solvents and, at pilot scale, the use of filters and adsorbent resins. The inclusion of phenolic compounds from OOBP in some food matrices has improved not only their antioxidant capacity but also their sensory attributes.
Abstract:
A vitamin E extraction method for rainbow trout flesh was optimized, validated, and applied to fish fed commercial and Gracilaria vermiculophylla-supplemented diets. Five extraction methods were compared. Vitamers were analyzed by HPLC/DAD/fluorescence. A solid-liquid extraction with n-hexane, which showed the best performance, was optimized and validated. Among the eight vitamers, only α- and γ-tocopherol were detected in muscle samples. The final method showed good linearity (>0.999), intra- (<3.1%) and inter-day precision (<2.6%), and recoveries (>96%). Detection and quantification limits were 39.9 and 121.0 ng/g of muscle for α-tocopherol, and 111.4 and 337.6 ng/g for γ-tocopherol, respectively. Compared to the control group, the dietary inclusion of 5% G. vermiculophylla resulted in a slight reduction of lipids in muscle and, consequently, of α- and γ-tocopherol. Nevertheless, the vitamin E profile in lipids was maintained. In general, the results may be explained by the lower vitamin E level in the seaweed-containing diet. Practical Applications: Based on the validation results and the low solvent consumption, the developed method can be used to analyze vitamin E in rainbow trout. The results of this work are also a valuable source of information for fish feed industries and aquaculture producers, who can focus on improving seaweed inclusion in feeds as a source of vitamin E in fish muscle and, therefore, take full advantage of all bioactive components with an important role in fish health and flesh quality.
Abstract:
Smart Grids (SGs) have emerged as the new paradigm for power system operation and management, being designed to include large amounts of distributed energy resources. This new paradigm requires new Energy Resource Management (ERM) methodologies considering different operation strategies and the existence of new management players, such as several types of aggregators. This paper proposes a methodology to facilitate coalitions between distributed generation units, originating Virtual Power Players (VPPs), under a game theory approach. The proposed approach consists of analyzing the classifications attributed by each VPP to the distributed generation units, as well as the contracts previously established by each player. The proposed classification model is based on fourteen parameters, including technical, economic, and behavioural ones. Depending on the VPP's strategies, size, and goals, each parameter has a different importance. VPPs can also manage other types of energy resources, such as storage units, electric vehicles, demand response programs, or even parts of the MV and LV distribution networks. A case study with twelve VPPs with different characteristics and one hundred and fifty real distributed generation units is included in the paper.
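As a toy illustration of such a classification, a VPP-specific weighted score over the parameters could look like the sketch below; the weights and parameter names are invented, and the paper's actual fourteen-parameter model is not reproduced:

```python
def classify_unit(unit: dict, weights: dict) -> float:
    """Score a distributed generation unit as a weighted sum of normalized
    parameters (technical, economic, behavioural). The weights encode each
    VPP's strategy, size, and goals, so different VPPs rank units differently."""
    return sum(weights[p] * unit[p] for p in weights)

# Invented example with 3 of the 14 parameters, normalized to [0, 1]:
weights = {'availability': 0.5, 'energy_cost': 0.3, 'contract_history': 0.2}
unit = {'availability': 0.9, 'energy_cost': 0.6, 'contract_history': 0.8}
print(classify_unit(unit, weights))  # 0.79
```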
Abstract:
Further improvements in the implementation of demand response programs are needed in order to take full advantage of this resource, namely for participation in energy and reserve market products, which requires adequate aggregation and remuneration of small-size resources. The present paper focuses on SPIDER, a demand response simulator that has been improved to include realistic power system simulation. To illustrate the simulator's capabilities, the paper proposes a methodology focusing on the aggregation of consumers and generators, providing adequate tools for the adoption of demand response programs by the involved players. The proposed methodology centres on a Virtual Power Player (VPP) that manages and aggregates the available demand response and distributed generation resources in order to satisfy the required electrical energy demand and reserve. The aggregation of resources is addressed through clustering algorithms, and the operation costs for the VPP are minimized. The presented case study is based on a set of 32 consumers and 66 distributed generation units, running across 180 distinct operation scenarios.
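A generic sketch of the clustering-based aggregation step, using scikit-learn's KMeans as a stand-in; the resource features and group count are invented and do not reproduce SPIDER's actual algorithm:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Invented features per resource: [average load/generation (kW), flexibility (0-1)];
# 98 rows stand in for the case study's 32 consumers + 66 DG units.
resources = rng.uniform([0.0, 0.0], [50.0, 1.0], size=(98, 2))

# Group similar resources into aggregation clusters that the VPP can
# schedule and remunerate uniformly.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(resources)
for group in range(4):
    members = np.flatnonzero(kmeans.labels_ == group)
    print(f'aggregation group {group}: {len(members)} resources')
```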
Abstract:
The vision of the Internet of Things (IoT) includes large and dense deployments of interconnected smart sensing and monitoring devices. This vast deployment necessitates the collection and processing of large volumes of measurement data. However, collecting all the measured data from individual devices on such a scale may be impractical and time-consuming. Moreover, processing these measurements requires complex algorithms to extract useful information. Thus, it becomes imperative to devise distributed information processing mechanisms that identify application-specific features in a timely manner and with low overhead. In this article, we present a feature extraction mechanism for dense networks that takes advantage of dominance-based medium access control (MAC) protocols to (i) efficiently obtain global extrema of the sensed quantities, (ii) extract local extrema, and (iii) detect the boundaries of events, using simple transforms that nodes apply to their local data. We extend our results to a large dense network with multiple broadcast domains (MBD). We discuss and compare two approaches for addressing the challenges with MBD, and we show through extensive evaluations that our proposed distributed MBD approach is fast and efficient at retrieving the most valuable measurements, independent of the number of sensor nodes in the network.
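The dominance-based MAC idea can be illustrated with a toy bitwise arbitration round in the spirit of CAN-style arbitration (a simplified simulation, not the protocol of the article): every node transmits its reading one bit at a time, most significant bit first, over a shared medium that behaves like a wired OR, so after the last bit the channel has spelled out the network-wide maximum without collecting individual readings.

```python
def global_max_by_dominance(values, bits=8):
    """Simulate one dominance round: each node transmits its reading MSB-first;
    the shared medium ORs the transmitted bits (a dominant bit wins). Nodes
    that observe a dominant bit they did not send withdraw, so the surviving
    bit pattern equals the global maximum, in time proportional to the bit
    width rather than to the number of nodes."""
    contenders = list(values)
    result = 0
    for bit in reversed(range(bits)):
        channel = max((v >> bit) & 1 for v in contenders)  # wired-OR of the bus
        result = (result << 1) | channel
        if channel:  # nodes whose bit was recessive drop out of contention
            contenders = [v for v in contenders if (v >> bit) & 1]
    return result

print(global_max_by_dominance([17, 203, 42, 99]))  # -> 203
```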