980 results for Processing times
Abstract:
Pressure myography studies have played a crucial role in our understanding of vascular physiology and pathophysiology. Such studies depend upon the reliable measurement of changes in the diameter of isolated vessel segments over time. Although several software packages are available to carry out such measurements on small arteries and veins, no such software exists to study smaller vessels (<50 µm in diameter). We provide here a new, freely available, open-source algorithm, MyoTracker, to measure and track changes in the diameter of small isolated retinal arterioles. The program has been developed as an ImageJ plug-in and uses a combination of cost analysis and edge enhancement to detect the vessel walls. In tests performed on a dataset of 102 images, automatic measurements were found to be comparable to manual ones. The program was also able to track both fast and slow constrictions and dilations during intraluminal pressure changes and following the application of several drugs. The variability of the automated measurements during video analysis, as well as processing times, was also investigated and is reported. MyoTracker is a new software tool to assist with pressure myography experiments on small isolated retinal arterioles. It provides fast and accurate measurements with low levels of noise and works with both individual images and videos. Although the program was developed to work with small arterioles, it is also capable of tracking the walls of other types of microvessels, including venules and capillaries. It also works well with larger arteries, and may therefore provide an alternative to other packages developed for larger vessels when its features are considered advantageous.
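As a rough illustration of the edge-based measurement described above, the sketch below estimates a vessel diameter from a single intensity profile taken perpendicular to the vessel axis. It is a minimal reconstruction of the general idea, not MyoTracker's actual ImageJ implementation; the synthetic profile, the scale factor, and the choice of the two strongest gradient peaks as the walls are assumptions.

```python
# Minimal sketch of edge-based diameter measurement (illustrative only,
# NOT MyoTracker's code): the vessel walls are taken to be the two
# strongest intensity gradients along a cross-sectional profile.
import numpy as np

def vessel_diameter(profile: np.ndarray, um_per_px: float) -> float:
    """Estimate vessel diameter from a 1-D cross-sectional profile."""
    grad = np.gradient(profile.astype(float))
    left = int(np.argmin(grad))    # strongest falling edge (one wall)
    right = int(np.argmax(grad))   # strongest rising edge (other wall)
    return abs(right - left) * um_per_px

# Example: a dark vessel roughly 20 px wide on a bright background
x = np.arange(100)
profile = 200 - 120 * ((x > 40) & (x < 60))
print(vessel_diameter(profile, um_per_px=1.0))  # ~19-20 µm here
```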
Abstract:
A new heuristic based on the Nawaz–Enscore–Ham (NEH) algorithm is proposed in this paper for solving the permutation flowshop scheduling problem. A new priority rule is proposed that accounts for the average, mean absolute deviation, skewness, and kurtosis of the processing times, in order to fully describe their distribution. A new tie-breaking rule is also introduced to achieve effective job insertion with the objective of minimizing both makespan and machine idle time. Statistical tests show that the proposed algorithm yields better solution quality than existing benchmark heuristics.
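To make the mechanism concrete, here is a hedged sketch of an NEH-style construction with a moment-based priority rule. The equal-weight sum of mean, MAD, skewness and kurtosis, and the first-position tie-breaking implicit in Python's min, are illustrative assumptions; the paper's actual priority weights and tie-breaking rule are not reproduced here.

```python
import numpy as np

def makespan(seq, p):
    """Permutation flowshop makespan; p is a jobs x machines array."""
    m = p.shape[1]
    c = np.zeros(m)                       # completion time per machine
    for j in seq:
        for k in range(m):
            c[k] = max(c[k], c[k - 1] if k else 0.0) + p[j, k]
    return c[-1]

def moment_priority(times):
    """Equal-weight sum of mean, MAD, skewness and kurtosis of one
    job's processing times -- illustrative weights, not the paper's."""
    mu, sd = times.mean(), times.std()
    mad = np.abs(times - mu).mean()
    skew = ((times - mu) ** 3).mean() / sd ** 3 if sd else 0.0
    kurt = ((times - mu) ** 4).mean() / sd ** 4 if sd else 0.0
    return mu + mad + skew + kurt

def neh(p):
    # NEH: order jobs by priority, then insert each at its best position
    order = sorted(range(len(p)), key=lambda j: moment_priority(p[j]),
                   reverse=True)
    seq = []
    for j in order:
        candidates = [seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)]
        seq = min(candidates, key=lambda s: makespan(s, p))
    return seq

rng = np.random.default_rng(0)
p = rng.integers(1, 20, size=(5, 3)).astype(float)  # 5 jobs, 3 machines
best = neh(p)
print(best, makespan(best, p))
```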
Abstract:
Air frying is being projected as an alternative to deep fat frying for producing snacks such as French fries. In air frying, the raw potato sections are essentially heated in hot air containing fine oil droplets, which dehydrates the potato and attempts to impart the characteristics of traditionally produced French fries, but with a substantially lower level of fat absorbed in the product. The aim of this research is to compare: 1) the process dynamics of air frying with conventional deep fat frying under otherwise similar operating conditions, and 2) the products formed by the two processes in terms of color, texture, microstructure, calorimetric properties, and sensory characteristics. Although air frying produced products with a substantially lower fat content but similar moisture content and color characteristics, it required much longer processing times: typically 21 minutes, compared with 9 minutes for deep fat frying. The slower evolution of temperature also resulted in lower rates of moisture loss and of color development reactions. DSC studies revealed that the extent of starch gelatinization was also lower in the air-fried product. In addition, the two types of frying resulted in products with significantly different texture and sensory characteristics.
Abstract:
The green bean has organoleptic and nutritional characteristics that make it an important food source in tropical regions such as the Northeast of Brazil. It is a cheap source of protein, important for the nutrition of the rural population, and contributes significantly to subsistence farming among families of the Brazilian Northeast. It is consumed throughout the region and, together with dried meat and other products, makes up the menu of typical restaurants, making it an important product for the economy of the Northeast. The green bean is consumed freshly harvested and has a short cycle, so it is a highly perishable food, which hampers its marketing. Drying is an alternative for extending its shelf life and reducing its volume, making transportation and storage easier. However, it is necessary to find drying methods that preserve product quality, not only nutritional but also organoleptic. Some characteristics may change with the drying process, such as color, rehydration capacity, and grain cooking time. Reducing the drying time, or the exposure of the grains to high temperature, minimizes the effects associated with loss of product quality. Among the techniques used to reduce drying time and improve certain characteristics of the product, osmotic dehydration stands out; it is widely used in combined processes, for example as a pretreatment before drying. The use of microwaves is currently considered an alternative for food drying. Microwave energy generates heat inside the processed material and heating is practically instantaneous, resulting in shorter processing times and product quality superior to that obtained by conventional methods. Considering the importance of green beans for the Northeast region, the waste of production due to the seasonality of the crop, and their high perishability, this thesis studies the drying of the grains by microwaves, with and without osmotic pretreatment, seeking process conditions that favor rehydration of the product while preserving its organoleptic characteristics. Based on the analysis of the osmotic dehydration results and the dielectric properties, the operating condition for the pretreatment of the green bean was defined as osmotic concentration in a saline solution containing 12.5% sodium chloride, at 40°C for 20 minutes. Microwave drying of the green beans was performed with and without osmotic pretreatment at the optimized condition. The osmotic predehydration favored the subsequent drying, reducing the process time. The rehydration of the dehydrated green beans, with and without osmotic pretreatment, was carried out under different temperature and immersion-time conditions according to a 2² factorial design with 3 replicates at the central point. The best condition was obtained with beans osmotically pretreated and rehydrated at 60°C for 90 minutes. Sensory analysis compared fresh green beans with beans rehydrated under the optimized conditions, with and without osmotic pretreatment. All samples showed a good acceptance rate for the analyzed attributes (appearance, texture, color, odor, and taste), with all values above 70%. It can be concluded that microwave drying of green beans with osmotic pretreatment is feasible with respect to technical aspects, rehydration rates, and the sensory quality of the product.
Abstract:
Minimizing the makespan of a flow-shop no-wait (FSNW) schedule in which the processing times are randomly distributed is an important NP-complete combinatorial optimization problem. In spite of this, it is addressed in only very few papers in the literature. By considering the start-interval concept, this problem can be formulated, in a practical way, as a function of the probability of successfully preserving the FSNW constraints throughout the execution of all tasks. With this formulation, for the particular case of 3 machines, this paper presents different heuristic solutions: integrating local optimization steps with insertion procedures, and using genetic algorithms to search the solution space. Computational results and performance evaluations are discussed. Copyright (C) 1998 IFAC.
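For readers unfamiliar with the no-wait constraint, the sketch below computes the FSNW makespan of a fixed sequence with deterministic processing times: because each job must flow through the machines without waiting, only the minimal start offset between consecutive jobs needs to be derived. This is a generic illustration; the paper's stochastic start-interval formulation, insertion procedures, and genetic algorithm are not reproduced.

```python
# No-wait flowshop (FSNW) makespan for a given job sequence with
# deterministic times -- an illustrative baseline, not the paper's
# stochastic formulation.
import numpy as np

def fsnw_makespan(seq, p):
    """p is a jobs x machines array. Each job runs through all machines
    without waiting, so job j's finish offset on machine k is just the
    cumulative sum of its processing times."""
    cum = np.cumsum(p, axis=1)                 # finish offsets per machine
    start = 0.0
    for prev, nxt in zip(seq, seq[1:]):
        # start offsets of `nxt` on each machine, relative to its start
        head = np.concatenate(([0.0], cum[nxt][:-1]))
        # smallest shift of `nxt` that avoids overlap on every machine
        start += float(np.max(cum[prev] - head))
    return start + float(cum[seq[-1]][-1])

p = np.array([[3, 5, 2], [4, 1, 3], [2, 6, 4]], float)  # 3 jobs x 3 machines
print(fsnw_makespan([0, 1, 2], p))  # 21.0 for this sequence
```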
Abstract:
The Finite Element and Integral Equation numerical methods are commonly used for electromagnetic investigations in geophysics, and for such modeling it is important to know which algorithm is faster for a given geophysical model. In this work, we compare the computational times of these two methods on two-dimensional models with conductive heterogeneities in a resistive half-space, energized by an infinite current line (at a frequency of 1000 Hz) located on the surface, parallel to the strike of the heterogeneities. After validating and optimizing the programs, we analyzed the behavior of the processing times on models of rectangular bodies, varying the size, number, and inclination of the bodies. We also investigated which stages of each method demand the highest computational cost. In our models, the Finite Element method was more advantageous than the Integral Equation method, except for bodies with low conductivity or inclined geometry.
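The kind of wall-clock comparison described above can be reproduced with a simple timing harness; the sketch below is generic, with placeholder solver functions standing in for the actual Finite Element and Integral Equation codes (both function names and the workload are hypothetical).

```python
# Generic timing harness for comparing two solvers on the same model.
# fem_solve and ie_solve are hypothetical placeholders, not the codes
# used in the study.
import time

def fem_solve(model):        # placeholder workload
    return sum(x * x for x in model)

def ie_solve(model):         # placeholder workload
    return sum(x ** 0.5 for x in model)

def time_solver(solver, model, runs=5):
    """Best-of-`runs` wall-clock time, in seconds."""
    best = float("inf")
    for _ in range(runs):
        t0 = time.perf_counter()
        solver(model)
        best = min(best, time.perf_counter() - t0)
    return best

model = list(range(1, 200_000))
for name, solver in [("FEM", fem_solve), ("IE", ie_solve)]:
    print(f"{name}: {time_solver(solver, model):.4f} s")
```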
Abstract:
This work deals with the car sequencing (CS) problem, a combinatorial optimization problem for sequencing mixed-model assembly lines. The aim is to find a production sequence for different variants of a common base product such that work overload of the respective line operators is avoided or minimized. The variants are distinguished by certain options (e.g., sun roof yes/no) and therefore require different processing times at the stations of the line. CS introduces a so-called sequencing rule H:N for each option, which restricts the occurrence of that option to at most H in any N consecutive variants. It seeks a sequence that leads to no, or a minimum number of, sequencing rule violations. In this work, the suitability of CS for workload-oriented sequencing is analyzed. To this end, its solution quality is compared in experiments with that of the related mixed-model sequencing problem. A new sequencing rule generation approach as well as a new lower bound for the problem are presented. Different exact and heuristic solution methods for CS are developed, and their efficiency is shown in experiments. Furthermore, CS is adjusted and applied to a resequencing problem with pull-off tables.
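As an illustration of the sequencing rule just described, the following sketch counts H:N rule violations in a given production sequence by sliding a window of length N over it. The option name, the example sequence, and the choice to count only full windows are assumptions made for the example, not details from the paper.

```python
# Count violations of a car-sequencing rule H:N: an option may appear
# at most H times in any N consecutive variants.
def rule_violations(sequence, option, h, n):
    """Number of length-n windows containing `option` more than h times."""
    hits = [option in variant for variant in sequence]
    return sum(
        sum(hits[i:i + n]) > h
        for i in range(len(sequence) - n + 1)
    )

# Each variant is the set of options it carries (hypothetical data).
seq = [{"sunroof"}, {"sunroof"}, set(), {"sunroof"}, set(), set()]
print(rule_violations(seq, "sunroof", h=1, n=3))  # -> 2 violated windows
```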
Abstract:
This paper reports on the application of full-body radiography to nontraumatic emergency situations. The Lodox Statscan is an X-ray machine capable of imaging the entire body in 13 seconds using linear slit scanning radiography (LSSR). Nontraumatic emergency applications in ventriculoperitoneal (VP) shunt visualisation, emergency room arteriography (ERA), detection of foreign bodies, and paediatric emergency imaging are presented. Reports show that the fast, full-body, and low-dose scanning capabilities of the Lodox system make it well suited to these applications, with the same or better image quality, faster processing times, and lower dose to patients. In particular, the large format scans allowing visualisation of a greater area of anatomy make it well suited to VP shunt monitoring, ERA, and the detection of foreign bodies. Whilst more studies are required, it can be concluded that the Lodox Statscan has the potential for widespread use in these and other nontraumatic emergency radiology applications.
Abstract:
We present a novel framework for the encoding latency analysis of arbitrary multiview video coding prediction structures. This framework avoids the need to consider a specific encoder architecture for encoding latency analysis by assuming unlimited processing capacity in the multiview encoder. Under this assumption, only the influence of the prediction structure and of the processing times has to be considered, and the encoding latency can be obtained systematically by means of a graph model. The results obtained with this model are valid for a multiview encoder with sufficient processing capacity, and serve as a lower bound otherwise. Furthermore, with the objective of designing low-latency encoders with a low penalty on rate-distortion performance, the graph model allows us to identify the prediction relationships that add the most encoding latency. Experimental results for JMVM prediction structures illustrate how low-latency prediction structures with a low rate-distortion penalty can be derived in a systematic manner using the new model.
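A minimal sketch of this kind of graph analysis is given below: assuming unlimited processing capacity, as the framework does, the encoding latency of each frame reduces to its longest chain of prediction dependencies weighted by per-frame processing times (a longest path in the dependency DAG). The frame names, dependencies, and processing times are hypothetical, and frame capture times are ignored for simplicity.

```python
# Encoding latency as a longest path in the prediction-dependency DAG,
# under the unlimited-processing-capacity assumption. All data below
# is hypothetical.
from functools import lru_cache

proc_time = {"I0": 10, "P1": 8, "B2": 6, "P3": 8}          # ms per frame
deps = {"I0": [], "P1": ["I0"], "B2": ["I0", "P3"], "P3": ["P1"]}

@lru_cache(maxsize=None)
def latency(frame):
    """Earliest time `frame` is fully encoded."""
    return proc_time[frame] + max((latency(d) for d in deps[frame]),
                                  default=0)

print({f: latency(f) for f in proc_time})  # B2 must wait for I0 and P3
```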
Abstract:
This paper presents a methodology for adapting an advanced communication system for deaf people to a new domain. The methodology is a user-centered design approach consisting of four main steps: requirement analysis, parallel corpus generation, technology adaptation to the new domain, and finally, system evaluation. In this paper, the new domain considered is dialogue at a hotel reception. With this methodology, it was possible to develop the system in a few months and obtain very good performance: good speech recognition and translation rates (around 90%) with short processing times.
Abstract:
This thesis presents a novel framework for the analysis and optimization of the encoding and decoding delay for multiview video. The objective of this framework is to provide a systematic methodology for the analysis of the delay in multiview encoders and decoders, and useful tools for the design of multiview encoders/decoders for applications with low-delay requirements. The proposed framework first characterizes the elements that influence delay performance: i) the multiview prediction structure, ii) the hardware model of the encoder/decoder, and iii) the frame processing times. Secondly, it provides algorithms for the computation of the encoding/decoding delay of any arbitrary multiview prediction structure. The core of this framework is a methodology for the analysis of the multiview encoding/decoding delay that is independent of the hardware architecture of the encoder/decoder, completed with a set of models that particularize this delay analysis to the characteristics of the hardware architecture of the encoder/decoder. Among these models, those based on graph theory acquire special relevance due to their capacity to decouple the influence of the different elements on the delay performance of the encoder/decoder by means of an abstraction of its processing capacity. To illustrate possible applications of this framework, this thesis presents some examples of its use in design problems that affect multiview encoders and decoders. This application scenario covers the following cases: strategies for the design of prediction structures that take delay requirements into consideration in addition to rate-distortion performance; the design of the number of processors and the analysis of processor-speed requirements in multiview encoders/decoders given a target delay; and a comparative analysis of the encoding delay performance of multiview encoders with different processing capabilities and hardware implementations.