949 results for Processing Time


Relevance:

60.00%

Publisher:

Abstract:

Two assembly line balancing problems are addressed. The first problem (called SALBP-1) is to minimize the number of linearly ordered stations for processing n partially ordered operations V = {1, 2, ..., n} within a fixed cycle time c. The second problem (called SALBP-2) is to minimize the cycle time for processing the partially ordered operations V on a fixed set of m linearly ordered stations. The processing time ti of each operation i ∈ V is known before solving problems SALBP-1 and SALBP-2. However, during the life cycle of the assembly line the values ti are definitely fixed only for the subset V \ Ṽ of automated operations. The other subset Ṽ ⊆ V consists of manual operations, for which it is impossible to fix exact processing times for the whole life cycle of the assembly line. If j ∈ Ṽ, then the operation time tj can differ from one cycle of the production process to another. For an optimal line balance b of the assembly line with operation times t1, t2, ..., tn, we investigate the stability of its optimality with respect to possible variations of the processing times tj of the manual operations j ∈ Ṽ.
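
To make the SALBP-1 setting concrete, here is a minimal station-oriented greedy sketch in Python; the function name, the longest-operation-first rule, and the input encoding are illustrative assumptions, not the paper's method (the paper analyzes stability of exact optima rather than heuristics).

```python
from typing import Dict, List, Set

def salbp1_greedy(times: Dict[int, float],
                  preds: Dict[int, Set[int]],
                  cycle_time: float) -> List[List[int]]:
    """Station-oriented greedy heuristic for SALBP-1: open stations one at a
    time and fill each with precedence-feasible operations that still fit in
    the cycle time (assumes every ti <= c, as is standard for SALBP)."""
    remaining = set(times)
    done: Set[int] = set()
    stations: List[List[int]] = []
    while remaining:
        load, station = 0.0, []
        while True:
            fits = [i for i in remaining
                    if preds.get(i, set()) <= done
                    and load + times[i] <= cycle_time]
            if not fits:
                break
            op = max(fits, key=lambda i: times[i])  # longest operation first
            station.append(op)
            load += times[op]
            remaining.discard(op)
            done.add(op)
        if not station:  # only possible if some ti > c or precedence is cyclic
            raise ValueError("an operation does not fit in the cycle time")
        stations.append(station)
    return stations

# Precedence 1,2 -> 3 -> 4,5 with c = 10 yields three stations.
print(salbp1_greedy({1: 4, 2: 5, 3: 6, 4: 3, 5: 2},
                    {3: {1, 2}, 4: {3}, 5: {3}}, 10.0))
```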

Relevance:

60.00%

Publisher:

Abstract:

This dissertation establishes the foundation for a new 3-D visual interface integrating Magnetic Resonance Imaging (MRI) with Diffusion Tensor Imaging (DTI). The need for such an interface is critical for understanding brain dynamics and for providing more accurate diagnosis of key brain dysfunctions in terms of neuronal connectivity. This work involved two research fronts: (1) the development of new image processing and visualization techniques in order to accurately establish the relational positioning of neuronal fiber tracts and key landmarks in 3-D brain atlases, and (2) the need to address the computational requirements so that the processing time stays within the practical bounds of clinical settings. The system was evaluated using data from thirty patients and volunteers at the Brain Institute at Miami Children's Hospital. Innovative visualization mechanisms allow, for the first time, white matter fiber tracts to be displayed alongside key anatomical structures within accurately registered 3-D semi-transparent images of the brain. The segmentation algorithm is based on the calculation of mathematically tuned thresholds and region-detection modules. The uniqueness of the algorithm is in its ability to perform fast and accurate segmentation of the ventricles. In contrast to manual selection of the ventricles, which averaged over 12 minutes, the segmentation algorithm averaged less than 10 seconds in its execution. The registration algorithm searches and compares MR and DT images of the same subject, with derived correlation measures quantifying the resulting accuracy. Overall, the images were 27% more correlated after registration, while registration, interpolation, and re-slicing of the images in all given dimensions took an average of only 1.5 seconds. This interface was fully embedded into a fiber-tracking software system in order to establish an optimal research environment. This highly integrated 3-D visualization system reached a practical level that makes it ready for clinical deployment.
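
The abstract does not spell out its thresholding rule, so the following is only a generic threshold-plus-connected-component sketch of ventricle segmentation in Python; the function `segment_ventricles` and the dark-ventricle assumption are hypothetical, not the dissertation's algorithm.

```python
import numpy as np
from scipy import ndimage

def segment_ventricles(mri_volume: np.ndarray, threshold: float) -> np.ndarray:
    """Illustrative threshold-plus-region segmentation: binarize the MR
    volume at an intensity threshold, label connected 3-D regions, and
    keep the largest one as the ventricle candidate."""
    mask = mri_volume < threshold          # assume ventricles appear dark
    labels, n = ndimage.label(mask)        # 3-D connected components
    if n == 0:
        return np.zeros_like(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    largest = 1 + int(np.argmax(sizes))    # component labels start at 1
    return labels == largest
```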

Relevance:

60.00%

Publisher:

Abstract:

Most pharmaceutically relevant proteins and many extracellular proteins contain disulfide bonds. Formation of the correct disulfide bonds is essential for stability in almost all cases. Disulfide-containing proteins can be rapidly and inexpensively overexpressed in bacteria. However, the overexpressed proteins usually form aggregates inside the bacteria, called inclusion bodies, which contain inactive and non-native protein. To obtain native protein, inclusion bodies need to be isolated and resolubilized, and the resulting protein then refolded in vitro. In vitro protein folding is aided by the addition of a redox buffer, which is composed of a small-molecule disulfide and/or a small-molecule thiol. The most commonly used redox buffer contains reduced and oxidized glutathione. Recently, aliphatic dithiols and aromatic monothiols have been employed as redox buffers. Aliphatic dithiols improved the yield of native protein as compared to the aliphatic thiol glutathione. Dithiols mimic the in vivo protein folding catalyst, protein disulfide isomerase, which has two thiols per active site. Furthermore, aromatic monothiols increased the folding rate and yield of lysozyme and RNase A relative to glutathione. By combining the beneficial properties of aliphatic dithiols and aromatic monothiols, aromatic dithiols were designed and expected to increase in vitro protein folding rates and yields. Aromatic monothiols (1-4) and their corresponding disulfides (5-8), two series of ortho- and para-substituted ethylene glycol dithiols (9-15), and a series of aromatic quaternary ammonium salt dithiols (16-17) were synthesized on a multigram scale. Monothiols and disulfides (1-8) were utilized to fold lysozyme and bovine pancreatic trypsin inhibitor. Dithiols (11-17) were tested for their ability to fold lysozyme. At pH 7.0 and pH 8.0 and high protein concentration (1 mg/mL), aromatic dithiols (16, 17) and a monothiol (3) significantly enhanced the in vitro folding rate and yield of lysozyme relative to the aliphatic thiol glutathione. Additionally, aromatic dithiols (16, 17) significantly enhanced the folding yield as compared to the corresponding aromatic monothiol (3). Thus, the folding rate and yield enhancements achieved in in vitro protein folding at high protein concentration will decrease the volume of renaturation solution required for large-scale processes and consequently reduce processing time and cost.

Relevance:

60.00%

Publisher:

Abstract:

This dissertation introduces a new system for handwritten text recognition based on an improved neural network design. Most existing neural networks use the mean square error function as the standard error function. The system proposed in this dissertation utilizes the mean quartic error function, whose third and fourth derivatives are non-zero. Consequently, many improvements to the training methods were achieved. The training results are carefully assessed before and after each update. To evaluate the performance of a training system, three essential factors are considered, listed here from high to low priority: (1) the error rate on the testing set, (2) the processing time needed to recognize a segmented character, and (3) the total training time and, subsequently, the total testing time. It is observed that bounded training methods accelerate the training process, while semi-third-order training methods, next-minimal training methods, and preprocessing operations reduce the error rate on the testing set. Empirical observations suggest that two different combinations of training methods are needed for recognizing lower-case and upper-case characters. Since character segmentation is required for word and sentence recognition, this dissertation also provides an effective rule-based segmentation method, which differs from conventional adaptive segmentation methods. Dictionary-based correction is utilized to correct mistakes resulting from the recognition and segmentation phases. The integration of the segmentation methods with the handwritten character recognition algorithm yielded an accuracy of 92% for lower-case characters and 97% for upper-case characters. In the testing phase, the database consisted of 20,000 handwritten characters, 10,000 for each case. Recognizing the 10,000 handwritten test characters required 8.5 seconds of processing time.
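
As a sketch of the error function at the core of the proposal: the abstract only states that the mean quartic error has non-zero third and fourth derivatives, so the exact form below, E = mean((y_pred - y_true)^4), and both function names are assumptions.

```python
import numpy as np

def mean_quartic_error(y_pred: np.ndarray, y_true: np.ndarray) -> float:
    """Mean quartic error: E = mean((y_pred - y_true)^4). Unlike the MSE,
    its third and fourth derivatives w.r.t. the residual are non-zero."""
    return float(np.mean((y_pred - y_true) ** 4))

def mqe_gradient(y_pred: np.ndarray, y_true: np.ndarray) -> np.ndarray:
    """dE/dy_pred = 4*(y_pred - y_true)^3 / n -- the cubic term weights
    large residuals far more heavily than the MSE's linear term does."""
    r = y_pred - y_true
    return 4.0 * r ** 3 / r.size
```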

Relevance:

60.00%

Publisher:

Abstract:

There is an increasing demand for DNA analysis because of the sensitivity of the method and the ability to uniquely identify and distinguish individuals with a high degree of certainty. But this demand has led to huge backlogs in evidence lockers, since the current DNA extraction protocols require long processing times. The DNA analysis procedure becomes more complicated when analyzing sexual assault casework samples, where the evidence contains more than one contributor. Additional processing to separate different cell types, needed to simplify the final data interpretation, further adds to the existing cumbersome protocols. The goal of the present project is to develop a rapid and efficient extraction method that permits selective digestion of mixtures. Selective recovery of male DNA was achieved with as little as 15 minutes of lysis time upon exposure to high pressure under alkaline conditions. Pressure cycling technology (PCT) is carried out in a barocycler that has a small footprint and is semi-automated. Whereas typically less than 10% of the male DNA is recovered using the standard extraction protocol for rape kits, almost seven times more male DNA was recovered from swabs using this novel method. Various parameters, including instrument settings and buffer composition, were optimized to achieve selective recovery of sperm DNA. Developmental validation studies were also done to determine the efficiency of this method in processing samples exposed to various conditions that can affect the quality of the extraction and the final DNA profile. An easy-to-use interface, minimal manual interference, and the ability to achieve high yields with simple reagents in a relatively short time make this an ideal method for potential application in analyzing sexual assault samples.

Relevance:

60.00%

Publisher:

Abstract:

Computational intelligence methods have been expanding into industrial applications, motivated by their ability to solve engineering problems. Embedded systems follow the same idea, using computational intelligence tools embedded in machines. There are several works in the areas of embedded systems and intelligent systems; however, only a few papers have joined both areas. The aim of this study was to implement adaptive fuzzy neural hardware with online training embedded on a Field Programmable Gate Array (FPGA). The system can adapt during the execution of a given application, aiming at online performance improvement. The proposed system architecture is modular, allowing different configurations of fuzzy neural network topologies with online training. The proposed system was applied to mathematical function interpolation, pattern classification, and self-compensation of industrial sensors, achieving satisfactory performance in all tasks. The experimental results show the advantages and disadvantages of online training in hardware when performed in parallel and sequentially. The sequential training method saves FPGA area but increases the complexity of the architecture's control actions. The parallel training method achieves high performance and reduced processing time; the pipeline technique is used to further increase the performance of the proposed architecture. The development of the study was based on available tools for FPGA circuits.
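
For illustration, here is a minimal software analogue of an adaptive fuzzy neural model with online (per-sample) training; the zero-order Takagi-Sugeno structure, the class name, and the LMS-style update are assumptions, not the hardware architecture described above.

```python
import numpy as np

def gaussian_mf(x: float, c: np.ndarray, s: float) -> np.ndarray:
    """Gaussian membership function, common in fuzzy-neural models."""
    return np.exp(-((x - c) ** 2) / (2 * s ** 2))

class TinyFuzzyNeuralNet:
    """Zero-order Takagi-Sugeno fuzzy-neural sketch with online gradient
    training of the rule consequents."""

    def __init__(self, centers, sigma, lr=0.1):
        self.centers = np.asarray(centers, dtype=float)
        self.sigma = float(sigma)
        self.weights = np.zeros_like(self.centers)  # rule consequents
        self.lr = lr

    def forward(self, x: float) -> float:
        mu = gaussian_mf(x, self.centers, self.sigma)  # rule firing strengths
        self._phi = mu / mu.sum()                      # normalized strengths
        return float(self._phi @ self.weights)

    def train_step(self, x: float, target: float) -> float:
        err = target - self.forward(x)
        self.weights += self.lr * err * self._phi      # online LMS update
        return err
```

For example, net = TinyFuzzyNeuralNet(centers=[-1, 0, 1], sigma=0.5) can be trained one sample at a time with net.train_step(x, y), mirroring the per-sample adaptation that the hardware performs online.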

Relevance:

60.00%

Publisher:

Abstract:

Currently, Brazil is one of the largest fruit producers worldwide, with most of its production consumed fresh or as juice or pulp. It is important to highlight that the fruit production chain suffers considerable losses, due mainly to climate, as well as to storage, transportation, seasonality, market conditions, etc. It is known that the pulp and fruit processing industry usually obtains a yield of 50% (in mass), with the remainder discarded as waste. However, since most of this waste has a high nutrient content, it can be used to generate added-value products. In this context, drying plays an important role as an alternative process for making use of the wastes generated by the fruit industry. Despite the advantages of this technique, issues such as its high power demand and limited thermal efficiency must be addressed. Therefore, controlling the main variables of the drying process is essential for finding operational conditions that produce a final product within the target specification at a lower energy cost. Mathematical models can be applied to this process as a tool to determine the best conditions. The main aim of this work was to evaluate the drying behaviour of an industrial guava pulp waste in a batch convective tray dryer, both experimentally and through mathematical modeling. In the experimental study, the drying in a group of trays and the power consumption were assayed as responses to the operational conditions (temperature, drying air flow rate, and solid mass), allowing the most significant process variables to be identified. The phenomenological mathematical model was validated and reproduced the moisture and temperature profiles of the solid and gas phases in every tray. Simulation results showed the most favorable procedure to obtain the minimum processing time as well as the lowest power demand.
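
The abstract does not name the phenomenological model, so the following is only a standard thin-layer drying form often used for fruit residues, shown for concreteness:

```latex
\frac{dM}{dt} = -k\,(M - M_e)
\quad\Longrightarrow\quad
MR(t) = \frac{M(t) - M_e}{M_0 - M_e} = e^{-kt},
```

where M is the moisture content (dry basis), M_e its equilibrium value, M_0 the initial value, and k a drying constant that depends on the air temperature and flow rate.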

Relevance:

60.00%

Publisher:

Abstract:

Launching centers are designed for scientific and commercial activities with aerospace vehicles. Rocket Tracking Systems (RTS) are part of the infrastructure of these centers, and they are responsible for collecting and processing the trajectory data of vehicles. Generally, Parabolic Reflector Radars (PRRs) are used in RTS. However, it is possible to use radars with antenna arrays, or Phased Arrays (PAs), so-called Phased Array Radars (PARs). In a PAR, the excitation signal of each radiating element of the array can be adjusted to perform electronic control of the radiation pattern, improving the functionality and maintenance of the system. In implementation and reuse projects of PARs, the modeling is subject to various combinations of excitation signals, producing a complex optimization problem due to the large number of available solutions. In this case, it is possible to use offline optimization methods, such as Genetic Algorithms (GAs), to compute the problem solutions, which are stored for online applications. Hence, the Genetic Algorithm with Maximum-Minimum Crossover (GAMMC) optimization method was used to develop the GAMMC-P algorithm, which optimizes the modeling step of radiation pattern control for planar PAs. The GAMMC differs from a conventional crossover GA in that it crosses the fittest individuals with the least fit individuals in order to enhance genetic diversity. Thus, the GAMMC prevents premature convergence, increases population fitness, and reduces processing time. The GAMMC-P therefore uses a reconfigurable algorithm with multiple objectives, a different coding, and the MMC genetic operator. The test results show that GAMMC-P meets the proposed requirements for different operating conditions of a planar PAR.
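
A minimal Python sketch of the maximum-minimum pairing idea described above; the function names, the one-point crossover, and the list-of-floats chromosome encoding are assumptions, not the authors' exact operator.

```python
import random
from typing import Callable, List, Tuple

def mmc_pairs(population: List[List[float]],
              fitness: Callable[[List[float]], float]) -> List[Tuple[list, list]]:
    """Maximum-minimum pairing: sort by fitness and mate the fittest with the
    least fit, the second fittest with the second least fit, and so on,
    preserving genetic diversity and countering premature convergence."""
    ranked = sorted(population, key=fitness, reverse=True)
    return [(ranked[i], ranked[-1 - i]) for i in range(len(ranked) // 2)]

def one_point_crossover(a: List[float], b: List[float]) -> Tuple[list, list]:
    """Standard one-point crossover applied to each max-min pair
    (chromosomes must have length >= 2)."""
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:], b[:cut] + a[cut:]
```

Each pair returned by mmc_pairs can then be recombined with one_point_crossover(*pair), whereas a conventional GA would instead tend to pair the fittest individuals with each other.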

Relevance:

60.00%

Publisher:

Abstract:

This work presents an analysis of the behavior of some algorithms commonly found in the stereo correspondence literature, applied to full HD images (1920x1080 pixels), in order to establish, within the trade-off between precision and runtime, in which applications these methods can best be used. The images are obtained by a system composed of a stereo camera coupled to a computer via a capture board. The OpenCV library is used for the computer vision and image processing operations involved. The algorithms discussed are a general block-matching search using the Sum of Absolute Differences (SAD), a global technique based on energy minimization via graph cuts, and a so-called semi-global matching technique. The criteria for analysis are processing time, heap memory consumption, and the mean absolute error of the generated disparity maps.
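
As a sketch of two of the compared methods using OpenCV's Python bindings (the work used OpenCV, but these specific parameter values and file names are illustrative assumptions):

```python
import time
import cv2

# Rectified stereo pair in grayscale; the file names are placeholders.
left = cv2.imread("left_1080p.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_1080p.png", cv2.IMREAD_GRAYSCALE)

# Local block matching with a SAD cost (OpenCV's StereoBM).
bm = cv2.StereoBM_create(numDisparities=128, blockSize=21)
t0 = time.perf_counter()
disp_bm = bm.compute(left, right)
print("BM time:", time.perf_counter() - t0, "s")

# Semi-global matching (OpenCV's StereoSGBM).
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
t0 = time.perf_counter()
disp_sgbm = sgbm.compute(left, right)
print("SGBM time:", time.perf_counter() - t0, "s")
```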

Relevance:

60.00%

Publisher:

Abstract:

Current and future applications pose new requirements that the Internet architecture is not able to satisfy, such as mobility, multicast, multihoming, and bandwidth guarantees; its limitations prevent all future requirements from being covered. New architectures have been proposed that take these requirements into account when a communication is established. ETArch (Entity Title Architecture) is a new, clean-slate Internet architecture, able to use the application's requirements for each communication and flexible enough to work with several layers. Routing plays an important role on the Internet, because it decides the best way to forward primitives through the network. In the Future Internet, all requirements depend on routing: it is responsible for deciding the best path and, in the future, a better route may also consider mobility aspects or energy consumption, for instance. At the dawn of ETArch, its routing had not yet been defined. This work provides intra- and inter-domain routing algorithms to be used in ETArch. It is assumed that the route should be defined completely before data start to flow, to ensure that the requirements are met. On the Internet, routing has two distinct functions: (i) running specific algorithms to define the best route; and (ii) forwarding data primitives to the correct link. In the traditional Internet architecture, both routing functions are performed in every router each time a packet arrives. This work allows the complete route to be defined before the communication starts, as in telecommunication systems. The routing for ETArch was specified, and experiments were performed to demonstrate the viability of control-plane routing: the initial setup before a communication takes longer, but afterwards only the forwarding of primitives is performed, saving processing time.
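
To illustrate the setup-time idea, here is a minimal Python sketch in which the full route is computed once (Dijkstra) before any data primitive is forwarded; the graph encoding and function name are assumptions, not ETArch's actual algorithms.

```python
import heapq
from typing import Dict, List

def shortest_path(graph: Dict[str, Dict[str, float]],
                  src: str, dst: str) -> List[str]:
    """Dijkstra run once at communication setup: the whole route is fixed
    before forwarding starts, so no per-packet route computation remains."""
    dist = {src: 0.0}
    prev: Dict[str, str] = {}
    heap = [(0.0, src)]
    seen = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst        # walk predecessors back to the source
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

net = {"A": {"B": 1.0}, "B": {"A": 1.0, "C": 2.0}, "C": {"B": 2.0}}
print(shortest_path(net, "A", "C"))  # ['A', 'B', 'C']
```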

Relevance:

60.00%

Publisher:

Abstract:

This dissertation focuses on two vital challenges in relation to whale acoustic signals: detection and classification.

In detection, we evaluated the influence of the uncertain ocean environment on the spectrogram-based detector, and derived the likelihood ratio of the proposed Short Time Fourier Transform detector. Experimental results showed that the proposed detector outperforms detectors based on the spectrogram. The proposed detector is more sensitive to environmental changes because it includes phase information.

In classification, our focus is on finding a robust and sparse representation of whale vocalizations. Because whale vocalizations can be modeled as polynomial phase signals, we can represent the whale calls by their polynomial phase coefficients. In this dissertation, we used the Weyl transform to capture chirp rate information, and used a two-dimensional feature set to represent whale vocalizations globally. Experimental results showed that our Weyl feature set outperforms chirplet coefficients and MFCCs (Mel Frequency Cepstral Coefficients) when applied to our collected data.
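
For reference, the standard polynomial phase model alluded to above (the order P and symbols are generic, not taken from the dissertation):

```latex
s(t) = A \exp\!\left( j\, 2\pi \sum_{p=0}^{P} a_p\, t^{p} \right),
```

where a_1 corresponds to the starting frequency and a_2 to the chirp rate that the Weyl transform is used to capture.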

Since whale vocalizations can be represented by polynomial phase coefficients, it is plausible that the signals lie on a manifold parameterized by these coefficients. We also studied the intrinsic structure of high dimensional whale data by exploiting its geometry. Experimental results showed that nonlinear mappings such as Laplacian Eigenmap and ISOMAP outperform linear mappings such as PCA and MDS, suggesting that the whale acoustic data is nonlinear.

We also explored deep learning algorithms on whale acoustic data. We built each layer as convolutions with either a PCA filter bank (PCANet) or a DCT filter bank (DCTNet). With the DCT filter bank, each layer has a different time-frequency scale representation, and from this, one can extract different physical information. Experimental results showed that our PCANet and DCTNet achieve high classification rates on the whale vocalization data set. The word error rate of the DCTNet feature is similar to that of the MFSC in speech recognition tasks, suggesting that the convolutional network is able to reveal the acoustic content of speech signals.
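
A minimal sketch of one DCT-filter-bank layer in Python, assuming 1-D signals and fixed (untrained) filters; the helper names and filter construction are illustrative, not the authors' DCTNet implementation.

```python
import numpy as np
from scipy.fft import dct
from scipy.signal import convolve

def dct_filter_bank(num_filters: int, length: int) -> np.ndarray:
    """First `num_filters` rows of the orthonormal DCT-II matrix,
    used as fixed 1-D convolution filters."""
    basis = dct(np.eye(length), type=2, norm="ortho", axis=0)
    return basis[:num_filters]

def dctnet_layer(signal: np.ndarray, filters: np.ndarray) -> np.ndarray:
    """One DCTNet-style layer: convolve the input with each fixed DCT
    filter, yielding a multi-channel time-frequency-like representation."""
    return np.stack([convolve(signal, f, mode="valid") for f in filters])
```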

Relevance:

60.00%

Publisher:

Abstract:

In 2001, as part of the EU FP5 project Greenveins, we started monitoring insect communities in the normal, agriculturally dominated landscape of Saxony-Anhalt (Germany). We selected four landscape sites of 4x4 km and recorded insects using combined flight traps, which combine the ideas of window and yellow pan traps (see Duelli et al., 1999). Traps consist of a yellow funnel (25 cm diameter) filled with water (with a preserving agent added) and two perspex windows mounted so that they cross in the center. Within each square kilometre of a site, one trap was placed at ecotones between semi-natural habitats and agricultural fields (16 traps per site). Traps were operated in late spring-early summer (three sampling rounds) and late summer (three sampling rounds). Follow-up sampling started in 2010 as long-term monitoring within the TERENO project (www.tereno.net), contributing to the LTER (Long-Term Ecosystem Research) network in Germany (www.lter-d.de) as well as internationally (www.lter-europe.net). Metadata about the sites and related activities and data sets can be found in the DEIMS Repository for Research Sites and Datasets (https://data.lter-europe.net/deims/). In 2010 another two landscapes were added and sampled yearly in the same way. Due to the long processing time of the trapped insects, data for follow-up years will be available about 18 months after trapping.

Relevance:

60.00%

Publisher:

Abstract:

Companies face new challenges almost every day. In order to stay competitive, it is important that they strive for continuous development and improvement. By describing companies through their processes it is possible to get a clear overview of the entire operation, which can contribute to a well-established overall understanding of the company. This is a case study based on Stort AB, a small logistics company specialized in international transportation and logistics solutions. The purpose of this study is to perform value stream mapping in order to create a more efficient production process and to propose improvements that reduce processing time. After the value stream mapping, data envelopment analysis (DEA) is used to calculate how lean Stort AB is today and how lean the company can become by implementing the proposed improvements. The results show that the production process can become more efficient by minimizing the waste caused by a bad workplace layout and by over-processing. The authors' suggested solution is to introduce standardized processes and invest in technical instruments in order to automate the process and reduce process time. According to the data envelopment analysis, the business is 41 percent lean at present, may soon become 55 percent lean, and could finally reach an optimal 100 percent lean mode if the process is automated.
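
The thesis does not state which DEA variant was used; for orientation, a common input-oriented CCR formulation looks like this (a sketch, not the study's exact model):

```latex
\begin{aligned}
\min_{\theta,\,\lambda}\ \ & \theta \\
\text{s.t.}\ \ & \textstyle\sum_{j} \lambda_j x_{ij} \le \theta\, x_{i0} && \text{for every input } i,\\
& \textstyle\sum_{j} \lambda_j y_{rj} \ge y_{r0} && \text{for every output } r,\\
& \lambda_j \ge 0,
\end{aligned}
```

where x and y are the observed inputs and outputs of the units and the efficiency score θ of the evaluated unit plays the role of the "percent lean" figure reported above.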

Relevance:

60.00%

Publisher:

Abstract:

FERNANDES, Fabiano A. N. et al. Optimization of osmotic dehydration of papaya followed by air-drying. Food Research International, v. 39, p. 492-498, 2006.