927 results for Low cost process


Relevance:

90.00%

Publisher:

Abstract:

This work reports the application of banana peel as a novel bioadsorbent for in vitro removal of five mycotoxins (the aflatoxins AFB1, AFB2, AFG1 and AFG2, and ochratoxin A). The effects of operational parameters, including initial pH, adsorbent dose, contact time and temperature, were studied in batch adsorption experiments. Scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FTIR) and point of zero charge (pHpzc) analysis were used to characterise the adsorbent material. Adsorption equilibrium for the aflatoxins was achieved in 15 min, with the highest adsorption at alkaline pH (6–8), while ochratoxin A showed no significant adsorption due to surface charge repulsion. The experimental equilibrium data were tested against the Langmuir, Freundlich and Hill isotherms. The Langmuir isotherm was found to be the best-fitting model for the aflatoxins, and the maximum monolayer coverage (Q0) was determined to be 8.4, 9.5, 0.4 and 1.1 ng mg−1 for AFB1, AFB2, AFG1 and AFG2, respectively. Thermodynamic parameters, including the changes in free energy (ΔG), enthalpy (ΔH) and entropy (ΔS), were determined for the four aflatoxins; the free energy and enthalpy changes showed that the adsorption process was exothermic and spontaneous. Adsorption and desorption studies at different pH further demonstrated that the sorption of the toxins is strong enough to withstand the pH changes that would be experienced in the gastrointestinal tract. This study suggests that biosorption of aflatoxins by dried banana peel may be an effective low-cost decontamination method for incorporation in animal feed diets. © 2016 Informa UK Limited, trading as Taylor & Francis Group.
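As an illustration of the isotherm-fitting step mentioned above, a minimal sketch of fitting the Langmuir model q = Q0·KL·C/(1 + KL·C) with SciPy is shown below; the equilibrium data points are hypothetical, since the study's raw measurements are not reproduced in this abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_eq, q_max, k_l):
    """Langmuir isotherm: monolayer capacity q_max, affinity constant k_l."""
    return q_max * k_l * c_eq / (1.0 + k_l * c_eq)

# Hypothetical equilibrium data (ng/mL vs. ng/mg); not the study's measurements.
c_eq = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
q_eq = np.array([1.9, 3.2, 4.9, 6.4, 7.5, 8.1])

(q_max, k_l), _ = curve_fit(langmuir, c_eq, q_eq, p0=[8.0, 0.5])
print(f"Q0 ~ {q_max:.2f} ng/mg, K_L ~ {k_l:.2f} mL/ng")
```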

Relevance:

90.00%

Publisher:

Abstract:

Due to the growth of design size and complexity, design verification is an important aspect of the logic circuit development process. The purpose of verification is to validate that the design meets the system requirements and specification. This is done by either functional or formal verification. The most popular approach to functional verification is the use of simulation-based techniques: using models to replicate the behaviour of an actual system is called simulation. In this thesis, a software/data-structure architecture without explicit locks is proposed to accelerate logic gate circuit simulation. We call this system ZSIM. The ZSIM software architecture simulator targets low-cost SIMD multi-core machines. Its performance is evaluated on the Intel Xeon Phi and two other machines (Intel Xeon and AMD Opteron). The aim of these experiments is to:

• Verify that the data structure used allows SIMD acceleration, particularly on machines with gather instructions (section 5.3.1).
• Verify that, on sufficiently large circuits, substantial gains can be made from multicore parallelism (section 5.3.2).
• Show that a simulator using this approach outperforms an existing commercial simulator on a standard workstation (section 5.3.3).
• Show that the performance on a cheap Xeon Phi card is competitive with results reported elsewhere on much more expensive supercomputers (section 5.3.5).

To evaluate ZSIM, two types of test circuits were used:

1. Circuits from the IWLS benchmark suite [1], which allow direct comparison with other published studies of parallel simulators.
2. Circuits generated by a parametrised circuit synthesizer. The synthesizer used an algorithm that has been shown to generate circuits that are statistically representative of real logic circuits, and it allowed testing of a range of very large circuits, larger than those for which open source files were available.

The experimental results show that, with SIMD acceleration and multicore parallelism, ZSIM achieved a peak parallelisation factor of 300 on the Intel Xeon Phi and 11 on the Intel Xeon. With only SIMD enabled, ZSIM achieved a maximum parallelisation gain of 10 on the Intel Xeon Phi and 4 on the Intel Xeon. Furthermore, it was shown that this software architecture simulator running on a SIMD machine is much faster than, and can handle much bigger circuits than, a widely used commercial simulator (Xilinx) running on a workstation. The performance achieved by ZSIM was also compared with similar pre-existing work on logic simulation targeting GPUs and supercomputers. It was shown that the ZSIM simulator running on a Xeon Phi machine gives comparable simulation performance to the IBM Blue Gene supercomputer at very much lower cost. The experimental results also show that the Xeon Phi is competitive with simulation on GPUs and allows the handling of much larger circuits than have been reported for GPU simulation. When targeting the Xeon Phi architecture, the automatic cache management of the Xeon Phi handles the on-chip local store without any explicit mention of the local store in the architecture of the simulator itself, whereas targeting GPUs requires explicit cache management in the program, which increases the complexity of the software architecture. Furthermore, one of the strongest points of the ZSIM simulator is its portability: the same code was tested on both AMD and Xeon Phi machines, and the architecture that performs efficiently on the Xeon Phi was ported to a 64-core NUMA AMD Opteron.

To conclude, the two main achievements are restated as follows. The primary achievement of this work was showing that the ZSIM architecture is faster than previously published logic simulators on low-cost platforms. The secondary achievement was the development of a synthetic testing suite that went beyond the scale range that was previously publicly available, based on prior work showing that the synthesis technique is valid.
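The abstract does not reproduce ZSIM's lock-free data structure, so the sketch below only illustrates the general word-parallel idea that makes gate-level simulation SIMD-friendly: 64 test patterns per signal are packed into a single uint64, one bitwise operation evaluates a gate for all 64 patterns, and a hypothetical netlist is processed in levelized order so no locks are needed.

```python
import numpy as np

# Word-parallel gate evaluation: 64 test patterns per signal packed into one uint64,
# so a single bitwise operation simulates a gate for 64 input vectors at once.
AND, OR, XOR, NOT = range(4)

# Hypothetical levelized netlist: (output, op, input_a, input_b).
gates = [
    (2, AND, 0, 1),   # n2 = n0 & n1
    (3, XOR, 2, 1),   # n3 = n2 ^ n1
    (4, NOT, 3, 3),   # n4 = ~n3
]

signals = np.zeros(5, dtype=np.uint64)
signals[0] = np.uint64(0xF0F0F0F0F0F0F0F0)   # 64 packed input patterns for n0
signals[1] = np.uint64(0xCCCCCCCCCCCCCCCC)   # 64 packed input patterns for n1

for out, op, a, b in gates:   # topological order: each gate depends only on earlier signals
    if op == AND:
        signals[out] = signals[a] & signals[b]
    elif op == OR:
        signals[out] = signals[a] | signals[b]
    elif op == XOR:
        signals[out] = signals[a] ^ signals[b]
    else:  # NOT
        signals[out] = ~signals[a]

print(f"n4 output patterns: {int(signals[4]):064b}")
```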

Relevance:

90.00%

Publisher:

Abstract:

A Bayesian optimisation algorithm for a nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. When a human scheduler works, he normally builds a schedule systematically following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet complete, and thus has the ability to finish a schedule using flexible, rather than fixed, rules. In this paper, we design a more human-like scheduling algorithm by using a Bayesian optimisation algorithm to implement explicit learning from past solutions. A nurse scheduling problem from a UK hospital is used for testing. Unlike our previous work, which used genetic algorithms to implement implicit learning [1], the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The Bayesian optimisation algorithm implements such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance of each variable is generated using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, new rule strings have been obtained. Sets of rule strings are generated in this way, some of which will replace previous strings based on fitness. If the stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings.

For clarity, consider the following toy example of scheduling five nurses with two rules (1: random allocation, 2: allocate nurse to low-cost shifts). At the beginning of the search, the probability of choosing rule 1 or 2 for each nurse is equal, i.e. 50%. After a few iterations, due to selection pressure and reinforcement learning, two solution pathways emerge: because pure low-cost or pure random allocation produces low-quality solutions, either rule 1 is used for the first 2-3 nurses and rule 2 for the remainder, or vice versa. In essence, the Bayesian network learns 'use rule 2 after using rule 1 two or three times', or vice versa. It should be noted that, for our problem and most other scheduling problems, the structure of the network model is known and all variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, so learning amounts to 'counting' in the case of multinomial distributions. For our problem, we use four rules: Random, Cheapest Cost, Best Cover, and Balance of Cost and Cover.

In more detail, the steps of our Bayesian optimisation algorithm for nurse scheduling are:
1. Set t = 0 and generate an initial population P(0) at random;
2. Use roulette-wheel selection to choose a set of promising rule strings S(t) from P(t);
3. Compute the conditional probabilities of each node according to this set of promising solutions;
4. Assign each nurse using roulette-wheel selection based on the rules' conditional probabilities; a set of new rule strings O(t) is generated in this way;
5. Create a new population P(t+1) by replacing some rule strings in P(t) with O(t), and set t = t+1;
6. If the termination conditions are not met (we use 2000 generations), go to step 2.

Computational results from 52 real data instances demonstrate the success of this approach. They also suggest that the learning mechanism in the proposed approach may be suitable for other scheduling problems. Another direction for further research is to see whether there is a good constructing sequence for individual data instances, given a fixed nurse scheduling order. If so, the good patterns could be recognized and extracted as new domain knowledge; using this extracted knowledge, we could assign specific rules to the corresponding nurses beforehand and only schedule the remaining nurses with all available rules, making it possible to reduce the solution space.

Acknowledgements: The work was funded by the UK Government's major funding agency, the Engineering and Physical Sciences Research Council (EPSRC), under grant GR/R92899/01.

References
[1] Aickelin U, "An Indirect Genetic Algorithm for Set Covering Problems", Journal of the Operational Research Society, 53(10): 1118-1126.
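As a rough sketch of the 'learning amounts to counting' step, the code below uses independent per-nurse rule probabilities, which is a simplification of the full Bayesian network described above, and a placeholder fitness function, since the real schedule-cost evaluation is not given here.

```python
import random

NURSES = 5                     # toy example from the abstract: five nurses, two rules (0 and 1)
POP, ELITE, GENS = 30, 10, 50

def fitness(rule_string):
    # Placeholder objective: the real evaluation would compute schedule cost and coverage.
    return -abs(sum(rule_string) - NURSES // 2)

def sample(probs):
    # Roulette-wheel choice of rule 0 or 1 for each nurse from its marginal probabilities.
    return [0 if random.random() < p[0] else 1 for p in probs]

probs = [[0.5, 0.5] for _ in range(NURSES)]        # start with equal rule probabilities
population = [sample(probs) for _ in range(POP)]

for _ in range(GENS):
    promising = sorted(population, key=fitness, reverse=True)[:ELITE]
    for n in range(NURSES):                        # "learning amounts to counting"
        count_rule0 = sum(1 for s in promising if s[n] == 0)
        probs[n] = [count_rule0 / ELITE, 1 - count_rule0 / ELITE]
    population = [sample(probs) for _ in range(POP)]

print("learned per-nurse probabilities of rule 0:", [round(p[0], 2) for p in probs])
```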

Relevance:

90.00%

Publisher:

Abstract:

The company studied is a Finnish manufacturer that produces and sells paints and lacquers internationally. In 2010 the company adopted new production and supply chain objectives and plans, and this study is part of that overall development direction. The study examines OEE, a tool for measuring and improving production and maintenance efficiency, and SMED, a tool intended for reducing product changeover times. The theoretical part of the work is based mainly on academic publications, but also on interviews, books, web pages and one annual report. In the empirical part, the problems and success of the OEE implementation were studied with a repeatable user survey. The potential and implementation of OEE were also studied by examining production and availability data collected from a production line. SMED was studied with the help of a computer program based on it; it was examined at a theoretical level and was not yet implemented in practice. According to the results, OEE and SMED suit the case company well and have considerable potential. OEE reveals not only the amount of availability losses but also their structure. With the help of OEE results, the company can direct its limited production and maintenance improvement resources to the right places. The production line examined in this work produced nothing for 56% of all planned production time in April 2016, and 44% of the line's stoppage time was caused by changeover, start-up or shutdown work. It can be concluded that availability losses are a serious problem for the company's production efficiency and that reducing changeover work is an important development target. Changeover time could be reduced by roughly 15% with simple and inexpensive changes to the work order and tools identified with SMED; the improvement would be even greater with more comprehensive changes. The greatest potential of SMED may not lie in shortening changeover times but in standardising them.
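For reference, OEE is the product of availability, performance and quality; the sketch below shows this standard calculation with illustrative figures rather than the thesis' actual production-line data.

```python
# Standard OEE calculation (availability x performance x quality) with illustrative figures.
planned_time_min = 24 * 60                     # planned production time
downtime_min = 0.56 * planned_time_min         # e.g. the 56% non-producing share reported above
ideal_cycle_time_min = 0.5                     # ideal time per unit
units_produced = 900
good_units = 870

availability = (planned_time_min - downtime_min) / planned_time_min
performance = (ideal_cycle_time_min * units_produced) / (planned_time_min - downtime_min)
quality = good_units / units_produced
oee = availability * performance * quality

print(f"A={availability:.2%}, P={performance:.2%}, Q={quality:.2%}, OEE={oee:.2%}")
```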

Relevance:

90.00%

Publisher:

Abstract:

Doctoral thesis—Universidade de Brasília, Instituto de Química, Programa de Pós-Graduação em Química, 2015.

Relevance:

90.00%

Publisher:

Abstract:

This work describes preliminary results of a two-modality imaging system aimed at the early detection of breast cancer. The first technique is based on compounding conventional echographic images taken at regular angular intervals around the imaged breast. The other modality obtains tomographic images of propagation velocity using the same circular geometry. For this study, a low-cost prototype has been built. It is based on a pair of opposed 128-element, 3.2 MHz array transducers that are mechanically moved around tissue-mimicking phantoms. Compounding images over 360 degrees provides improved resolution, clutter reduction, artifact suppression and reinforced visualization of internal structures. However, refraction at the skin interface must be corrected for an accurate image compounding process; this is achieved by estimating the interface geometry and then computing the internal ray paths. In addition, sound velocity tomographic images have been obtained from time-of-flight projections. Two reconstruction methods, Filtered Back Projection (FBP) and 2D Ordered Subset Expectation Maximization (2D OSEM), were used as a first attempt at tomographic reconstruction. These methods yield usable images in short computational times that can serve as initial estimates in subsequent, more complex methods of ultrasound image reconstruction. These images may be effective in differentiating malignant and benign masses and are very promising for breast cancer screening. (C) 2015 The Authors. Published by Elsevier B.V.
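As a minimal illustration of the FBP reconstruction step, the sketch below uses scikit-image's radon/iradon on a synthetic phantom; this stands in for, and is not, the prototype's own time-of-flight processing chain.

```python
import numpy as np
from skimage.transform import radon, iradon

# Synthetic phantom and projections; FBP reconstruction via filtered back projection.
phantom = np.zeros((128, 128))
phantom[40:90, 50:80] = 1.0                        # a block standing in for a velocity anomaly

angles = np.linspace(0.0, 180.0, 120, endpoint=False)
sinogram = radon(phantom, theta=angles)            # projections, analogous to time-of-flight data
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")

print(f"mean absolute reconstruction error: {np.mean(np.abs(reconstruction - phantom)):.3f}")
```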

Relevance:

90.00%

Publisher:

Abstract:

Increasingly, the main objectives in industry are low-cost production, with maximum quality and the shortest possible manufacturing time. To achieve this goal, industry frequently resorts to computer numerical control (CNC) machines, since this technology makes it possible to achieve high precision and shorter processing times. CNC machine tools can be applied to different machining processes, such as turning, milling and drilling, among others. Of all these processes, milling is the most widely used because of its versatility, and it is normally used to machine metallic materials such as steel and cast iron. In this work, the effects of varying four milling parameters (cutting speed, feed rate, radial depth of cut and axial depth of cut), individually and through the interaction between some of them, on the surface roughness of a hardened steel (12738 steel) are analysed. Two optimisation methods are used for this analysis: the Taguchi method and the response surface method. The first, the Taguchi method, was used to reduce the number of possible combinations and, consequently, the number of tests to be performed. The response surface method (RSM) was used in order to compare its results with those obtained with the Taguchi method; according to some works in the specialised literature, RSM converges more quickly to an optimal value. The Taguchi method is well known in the industrial sector, where it is used for quality control. It presents interesting concepts, such as robustness and quality loss, and is very useful for identifying variations in the production system during the industrial process, quantifying the variation and allowing undesirable factors to be eliminated. With this method, an L16 orthogonal array was built, two different levels were defined for each parameter, and sixteen tests were carried out. After each test, the surface roughness of the workpiece was measured. Based on the roughness measurements, the data were statistically treated through analysis of variance (ANOVA) in order to determine the influence of each parameter on surface roughness. The minimum measured roughness was 1.05 μm. This study also determined the contribution of each machining parameter and of their interactions. The analysis of the ANOVA F-ratio values reveals that the most important factors for minimising surface roughness are the radial depth of cut and the interaction between radial and axial depth of cut, with contributions of about 30% and 24%, respectively. In a second stage, the same study was carried out with the response surface method in order to compare the results of the two methods and determine which is the better optimisation method for minimising roughness. Response surface methodology is based on a set of mathematical and statistical techniques useful for modelling and analysing problems in which the response of interest is influenced by several variables and whose objective is to optimise that response. Only five tests were carried out for this method, unlike Taguchi, since in just five tests roughness values lower than the average roughness of the Taguchi method were obtained.
The lowest value obtained with this method was 1.03 μm. It is therefore concluded that RSM is a more suitable optimisation method than Taguchi for the tests carried out: better results were obtained in a smaller number of tests, which implies less tool wear, shorter processing time and a significant reduction in the material used.
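To make the Taguchi analysis concrete, the sketch below computes the smaller-is-better signal-to-noise ratio and a percent contribution from sums of squares for a single factor; the roughness values are hypothetical, not the study's measurements.

```python
import numpy as np

# Taguchi "smaller-is-better" S/N ratio and percent contribution for one factor.
runs = {  # factor level of radial depth of cut -> measured Ra values (um), hypothetical
    "low":  np.array([1.15, 1.22, 1.09, 1.18]),
    "high": np.array([1.64, 1.71, 1.58, 1.69]),
}

for level, ra in runs.items():
    sn = -10.0 * np.log10(np.mean(ra ** 2))        # smaller-is-better S/N ratio
    print(f"radial depth {level}: S/N = {sn:.2f} dB")

all_ra = np.concatenate(list(runs.values()))
ss_total = np.sum((all_ra - all_ra.mean()) ** 2)
ss_factor = sum(len(ra) * (ra.mean() - all_ra.mean()) ** 2 for ra in runs.values())
print(f"percent contribution of radial depth: {100 * ss_factor / ss_total:.1f}%")
```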

Relevance:

90.00%

Publisher:

Abstract:

Nanopore-based sequencers open the path to fourth-generation DNA sequencing technology. The main differences between this technique and the previous ones are that the DNA molecule to be sequenced does not need a prior amplification step, no specific labels or molecular adaptors are necessary, and no enzymatic process is required to identify the nucleotide sequence. As a result, the method is more economical, since it does not consume the reagents required by the earlier techniques, and it also allows samples with a low DNA concentration to be sequenced. The technique is based on a membrane with a biological nanopore inserted in it, through which the molecule to be analysed (the analyte) is made to pass. The membrane is placed between two reservoirs containing ions; when an external voltage is applied across it, an ion current flows through the nanopore. When an analyte crosses the nanopore, the ion current is modified, and the amplitude and duration of this modification reflect the physical and chemical properties of the analyte. Subsequent statistical analyses then determine which sequence corresponds to a given pattern of ion current blockade. The most widely used nanopores are biological ones, although synthetic nanopores are under development. The main biological nanopores are α-hemolysin from Staphylococcus aureus (α-HL), Mycobacterium smegmatis porin A (MspA) and the bacteriophage phi29 pore (phi29). α-HL and MspA have, at their narrowest point, a diameter similar to the size of a nucleotide and are functional at high temperatures and across a wide pH range (2-12), but MspA is able to read four nucleotides at a time while α-HL can only read them one by one. Finally, phi29 has a larger diameter, which allows information to be obtained about the spatial conformation of DNA and its interaction with proteins (Feng et al., 2015). At present, Oxford Nanopore Technologies (ONT) is the only company that has developed nanopore technology; it has two sequencing devices available (PromethION and MinION). The MinION is a single-use DNA sequencing device the size of a USB memory stick, with a total of 3,000 nanopores, that can sequence reads of up to 200 kb. The PromethION is a larger sequencer with 48 different flow cells, allowing different samples to be sequenced at the same time, with a total of 144,000 nanopores and reads of several megabases (https://www.nanoporetech.com/). Its high processivity and low cost make this technique a strong option for massive sequencing.
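Purely as an illustration of the blockade idea described above, the sketch below flags events where a simulated ion-current trace drops below a fixed fraction of the open-pore level; real basecalling (e.g. ONT's) is far more sophisticated, and all values here are invented.

```python
import numpy as np

# Simulated open-pore current trace with two blockade events, detected by a simple threshold.
rng = np.random.default_rng(1)
open_pore = 100.0                                   # arbitrary current units
current = open_pore + rng.normal(0, 1.5, 5000)
current[1200:1400] *= 0.55                          # simulated blockade events
current[3000:3250] *= 0.40

blocked = current < 0.8 * open_pore                 # simple threshold rule
edges = np.flatnonzero(np.diff(blocked.astype(int)))
events = (edges + 1).reshape(-1, 2)                 # (first blocked sample, first sample after)

for start, end in events:
    depth = 1.0 - current[start:end].mean() / open_pore
    print(f"event: {end - start} samples, relative blockade depth {depth:.2f}")
```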

Relevance:

90.00%

Publisher:

Abstract:

Nowadays, the new generation of computers provides the high performance needed to build computationally expensive computer vision applications for mobile robotics. Building a map of the environment is a common task for a robot and an essential part of allowing robots to move through these environments. Traditionally, mobile robots used a combination of several sensors from different technologies. Lasers, sonars and contact sensors have typically been used in mobile robotic architectures; however, color cameras are an important sensor because we want robots to use the same information that humans use to sense and move through different environments. Color cameras are cheap and flexible, but a lot of work needs to be done to give robots enough visual understanding of the scenes. Computer vision algorithms are computationally complex, but nowadays robots have access to different and powerful architectures that can be used for mobile robotics purposes. The advent of low-cost RGB-D sensors like the Microsoft Kinect, which provide 3D colored point clouds at high frame rates, has made computer vision even more relevant in the mobile robotics field. The combination of visual and 3D data allows systems to use both computer vision and 3D processing and therefore to be aware of more details of the surrounding environment.

The research described in this thesis was motivated by the need for scene mapping. Being aware of the surrounding environment is a key feature in many mobile robotics applications, from simple robotic navigation to complex surveillance applications. In addition, the acquisition of a 3D model of a scene is useful in many areas, such as video game scene modeling, where well-known places are reconstructed and added to game systems, or advertising, where once the 3D model of a room is obtained the system can add furniture pieces using augmented reality techniques. In this thesis we perform an experimental study of state-of-the-art registration methods to find which one best fits our scene mapping purposes. Different methods are tested and analyzed on scenes with different distributions of visual and geometric appearance. In addition, this thesis proposes two methods for 3D data compression and representation of 3D maps. Our 3D representation proposal is based on the Growing Neural Gas (GNG) method. This self-organizing map (SOM) has been successfully used for clustering, pattern recognition and topology representation of various kinds of data. Until now, self-organizing maps have been computed primarily offline, and their application to 3D data has mainly focused on noise-free models without considering time constraints. Self-organizing neural models have the ability to provide a good representation of the input space. In particular, the Growing Neural Gas (GNG) is a suitable model because of its flexibility, rapid adaptation and excellent quality of representation. However, this type of learning is time consuming, especially for high-dimensional input data. Since real applications often work under time constraints, it is necessary to adapt the learning process so that it completes in a predefined time. This thesis proposes a hardware implementation leveraging the computing power of modern GPUs, taking advantage of the paradigm known as General-Purpose Computing on Graphics Processing Units (GPGPU). Our proposed geometrical 3D compression method seeks to reduce the 3D information using plane detection as the basic structure to compress the data. This is because our target environments are man-made and therefore contain many points that belong to planar surfaces. Our proposed method is able to achieve good compression results in those man-made scenarios. The detected and compressed planes can also be used in other applications, such as surface reconstruction or plane-based registration algorithms. Finally, we have also demonstrated the benefits of GPU technology with a high-performance implementation of a common CAD/CAM technique called Virtual Digitizing.
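As an illustration of the plane-detection step underlying the proposed compression, the sketch below runs Open3D's RANSAC plane segmentation on a synthetic cloud; this is not necessarily the implementation used in the thesis.

```python
import numpy as np
import open3d as o3d

# Synthetic cloud: a noisy floor plane plus random clutter; detect the dominant plane.
rng = np.random.default_rng(0)
floor = np.c_[rng.uniform(0, 4, 2000), rng.uniform(0, 4, 2000), rng.normal(0, 0.01, 2000)]
clutter = rng.uniform(0, 4, (500, 3))
points = np.vstack([floor, clutter])

pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
plane_model, inliers = pcd.segment_plane(distance_threshold=0.02,
                                         ransac_n=3,
                                         num_iterations=500)
a, b, c, d = plane_model
print(f"plane: {a:.2f}x + {b:.2f}y + {c:.2f}z + {d:.2f} = 0, "
      f"{len(inliers)} of {len(points)} points representable by one plane")
```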

Relevance:

90.00%

Publisher:

Abstract:

Magnesium (Mg) batteries are considered a promising candidate for the next-generation battery technology that could potentially replace current lithium (Li)-ion batteries, for the following reasons. Magnesium possesses a higher volumetric capacity than commercialized Li-ion battery anode materials. Additionally, the low cost and high abundance of Mg compared to Li make Mg batteries even more attractive. Moreover, unlike metallic Li anodes, which tend to develop a dendritic structure on the surface upon cycling of the battery, Mg metal is known to be free from such a hazardous phenomenon. Due to these merits of Mg as an anode, the topic of rechargeable Mg batteries has attracted considerable attention among researchers in the last few decades. However, the aforementioned advantages of Mg batteries have not been fully utilized due to the serious kinetic limitation of the Mg2+ diffusion process in many hosting compounds, which is believed to arise from a strong electrostatic interaction between the divalent Mg2+ ions and the hosting matrix. This serious kinetic hindrance is directly related to the lack of cathode materials for Mg batteries that provide electrochemical performance comparable to that of Li-based systems. Manganese oxide (MnO2) is one of the most thoroughly studied electrode materials due to its excellent electrochemical properties, including high Li+ ion capacity and relatively high operating voltage (~4 V vs. Li/Li+ for LiMn2O4 and ~3.2 V vs. Mg/Mg2+). However, unlike the good electrochemical properties of MnO2 realized in Li-based systems, rather poor electrochemical performance has been reported in Mg-based systems, particularly low capacity and poor cycling performance. While the origin of the observed poor performance is believed to be the aforementioned strong ionic interaction between Mg2+ ions and the MnO2 lattice, resulting in limited diffusion of Mg2+ ions in MnO2, very little has been explored regarding the charge storage mechanism of MnO2 with divalent Mg2+ ions. This dissertation investigates the charge storage mechanism of MnO2, focusing on the insertion behavior of divalent Mg2+ ions and exploring the origins of the limited Mg2+ insertion in MnO2. It is found that the limited Mg2+ capacity in MnO2 can be significantly improved by introducing water molecules into the Mg electrolyte system, where the water molecules effectively mitigate the kinetic hindrance of the Mg2+ insertion process. The combination of a nanostructured MnO2 electrode and the water effect provides a synergistic effect, demonstrating further enhanced Mg2+ insertion capability. Furthermore, it is demonstrated in this study that pre-cycling MnO2 electrodes in a water-containing electrolyte activates the MnO2 electrode, after which the improved Mg2+ capacity is maintained in dry Mg electrolyte. Based on a series of XPS analyses, a conversion mechanism is proposed in which magnesiated MnO2 undergoes a conversion reaction to Mg(OH)2, MnOx and Mn(OH)y species in the presence of water molecules. This conversion process is believed to be the driving force behind the improved Mg2+ capacity in MnO2, along with the water molecules' charge-screening effect. Finally, it is discussed that upon consecutive cycling of MnO2 in the water-containing Mg electrolyte, structural water is generated within the MnO2 lattice, which is thought to be the origin of the observed activation phenomenon.
The results provided in this dissertation highlight that the divalency of Mg2+ ions results in very different electrochemical behavior towards MnO2 than that of the well-studied monovalent Li+ ions.

Relevance:

90.00%

Publisher:

Abstract:

Strawberries harvested for processing as frozen fruit are currently de-calyxed manually in the field. This process requires the removal of the stem cap with green leaves (i.e. the calyx) and incurs many disadvantages when performed by hand. Not only does it require maintaining cutting tool sanitation, it also increases labor time and the exposure of the de-capped strawberries before in-plant processing. This leads to labor inefficiency and decreased harvest yield. By moving the calyx removal process from the fields to the processing plants, this new practice would reduce field labor and improve management and logistics, while increasing annual yield. As labor prices continue to increase, the strawberry industry has shown great interest in the development and implementation of an automated calyx removal system. In response, this dissertation describes the design, operation, and performance of a full-scale automatic vision-guided intelligent de-calyxing (AVID) prototype machine. The AVID machine utilizes commercially available equipment to produce a relatively low-cost automated de-calyxing system that can be retrofitted into existing food processing facilities. This dissertation is broken up into five sections. The first two sections include a machine overview and a 12-week processing plant pilot study. Results of the pilot study indicate the AVID machine is able to de-calyx grade-1-with-cap conical strawberries at roughly 66 percent output weight yield at a throughput of 10,000 pounds per hour. The remaining three sections describe in detail the three main components of the machine: a strawberry loading and orientation conveyor, a machine vision system for calyx identification, and a synchronized multi-waterjet knife calyx removal system. In short, the loading system utilizes rotational energy to orient conical strawberries. The machine vision system determines cut locations through real-time RGB feature extraction. The high-speed multi-waterjet knife system uses direct drive actuation to position 30,000 psi cutting streams at precise coordinates for calyx removal. Based on the observations and studies performed within this dissertation, the AVID machine is seen to be a viable option for automated high-throughput strawberry calyx removal. A summary of future tasks and further improvements is discussed at the end.
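The dissertation does not publish the AVID vision code; the sketch below only illustrates the general idea of colour-based calyx localization with OpenCV, and the image path and HSV thresholds are assumptions.

```python
import cv2
import numpy as np

# Locate green (calyx-like) pixels in a frame and report a candidate cut location.
image = cv2.imread("strawberry.jpg")                   # hypothetical input frame
if image is None:
    raise SystemExit("provide an input frame to run this sketch")

hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
green_low = np.array([35, 60, 40], dtype=np.uint8)     # assumed HSV thresholds
green_high = np.array([85, 255, 255], dtype=np.uint8)
mask = cv2.inRange(hsv, green_low, green_high)

moments = cv2.moments(mask, binaryImage=True)
if moments["m00"] > 0:
    cx, cy = moments["m10"] / moments["m00"], moments["m01"] / moments["m00"]
    print(f"candidate cut location near pixel ({cx:.0f}, {cy:.0f})")
else:
    print("no calyx-coloured pixels found")
```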

Relevance:

90.00%

Publisher:

Abstract:

The low-cost development and study of detectors sensitive to flammable, combustible and toxic gases is a crucial technological challenge for bringing marketable versions to the market in general. Solid-state sensors are attractive for commercial purposes because of their robustness and lifetime, since the sensing material is not consumed in the reaction with the gas. In parallel, synthesis techniques that are more viable for application on an industrial scale are more attractive for producing commercial products. In this context, ceramics with a spinel structure were obtained by microwave-assisted combustion for application in flammable fuel gas detectors. Additionally, alternative organic reducers were employed to study their influence on the synthesis process and the resulting differences in performance and properties of the powders obtained. The organic reducers were characterized by thermogravimetry (TG) and derivative thermogravimetry (DTG). After synthesis, the samples were heat treated and characterized by Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), specific surface area analysis by the BET method and scanning electron microscopy (SEM). Quantification of phases and structural parameters was carried out by the Rietveld method. The methodology was effective in obtaining Ni-Mn mixed oxides. The fuels influenced the formation of the spinel phase and the morphology of the samples; however, in samples calcined at 950 °C only the spinel phase is present, regardless of the organic reducer. Therefore, differences in performance are expected in technological applications when samples with the same phase but different morphologies are tested.

Relevance:

90.00%

Publisher:

Abstract:

The destructive impact of improper disposal of heavy metals in the environment increases as a direct result of population explosion, urbanization, industrial expansion and technological development. Clays are potential materials for the adsorption of inorganic compounds, and their pelletization is required for use in fixed-bed adsorption columns. Their low cost and the possibility of regeneration make these materials attractive for use in purification processes capable of removing inorganic compounds from contaminated aquatic environments. In this work, pellets were made by wet agglomeration from mixtures of dolomite and montmorillonite in different percentages. The removal of Pb(II) was investigated through experimental studies and was modelled with kinetic models and adsorption isotherms. The materials were characterized using XRD, TG/DTA, FT-IR and surface area analysis by the BET method. The results showed the adsorption efficiency of the studied composite material for the contaminant in synthetic solution. The study found that the adsorption follows the Langmuir model and that the adsorption kinetics follow the pseudo-second-order model.
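As an illustration of the kinetic fitting mentioned above, a minimal sketch of the pseudo-second-order model in its linearised form t/q_t = 1/(k2·qe²) + t/qe is given below; the uptake data are hypothetical, not the study's measurements.

```python
import numpy as np

# Linearised pseudo-second-order fit: plot t/q_t against t, slope = 1/qe, intercept = 1/(k2*qe^2).
t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)      # contact time, min
q_t = np.array([4.1, 6.3, 8.2, 9.4, 9.8, 10.1, 10.2])        # adsorbed Pb(II), mg/g (hypothetical)

slope, intercept = np.polyfit(t, t / q_t, 1)
q_e = 1.0 / slope                        # equilibrium uptake
k2 = slope ** 2 / intercept              # rate constant, g/(mg*min)
print(f"qe ~ {q_e:.2f} mg/g, k2 ~ {k2:.4f} g/(mg*min)")
```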

Relevance:

90.00%

Publisher:

Abstract:

Sub-wavelength diameter holes in thin metal layers can exhibit remarkable optical features that make them highly suitable for (bio)sensing applications. Either as efficient light scattering centers for surface plasmon excitation or as metal-clad optical waveguides, they are able to form strongly localized optical fields that can effectively interact with biomolecules and/or nanoparticles on the nanoscale. As the metal of choice, aluminum exhibits good optical and electrical properties, is easy to manufacture and process and, unlike gold and silver, its low cost makes it very promising for commercial applications. However, aluminum has scarcely been used for biosensing purposes due to corrosion and pitting issues. In this short review, we present our recent achievements on aluminum nanohole platforms for (bio)sensing. These include a method to circumvent aluminum degradation, which has been successfully applied to demonstrate aluminum nanohole array (NHA) immunosensors based on both glass and polycarbonate compact disc supports; the use of aluminum nanoholes operating as optical waveguides for synthesizing submicron-sized molecularly imprinted polymers by local photopolymerization; and a technique for fabricating transferable aluminum NHAs onto flexible pressure-sensitive adhesive tapes, which could facilitate the development of a wearable technology based on aluminum NHAs.

Relevance:

90.00%

Publisher:

Abstract:

The assessment of building thermal performance is often carried out using HVAC energy consumption data, when available, or measurements of thermal comfort variables for free-running buildings. Both types of data can be determined by monitoring or computer simulation. Assessment based on thermal comfort variables is the most complex because it depends on the determination of the thermal comfort zone. For these reasons, this master's thesis explores methods of building thermal performance assessment using thermal comfort variables simulated with the DesignBuilder software. The main objective is to contribute to the development of methods to support architectural decisions during the design process, as well as energy and sustainability rating systems. The research method consists of selecting thermal comfort methods and modelling them in spreadsheets, with output charts developed to optimise the analyses, which are then used to assess the simulation results of low-cost house configurations. The house models consist of a base case, which has already been built, and variants with changes in thermal transmittance, absorptance and shading. The simulation results are assessed using each thermal comfort method in order to identify their sensitivity. The final results show the limitations of the methods, the importance of a method that considers thermal radiation and wind speed, and the contribution of the proposed chart.
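As one example of a thermal comfort zone that such spreadsheets can encode, the sketch below applies the ASHRAE 55 adaptive model; the operative temperatures are illustrative values, not DesignBuilder output, and the thesis may rely on different comfort methods.

```python
# ASHRAE 55 adaptive comfort model: the comfort temperature follows the prevailing
# mean outdoor temperature, with an acceptability band around it.

def adaptive_comfort_limits(t_out_mean, acceptability=80):
    """Return (lower, upper) operative temperature limits in degrees C."""
    t_comf = 0.31 * t_out_mean + 17.8
    half_band = 3.5 if acceptability == 80 else 2.5   # 80% vs 90% acceptability
    return t_comf - half_band, t_comf + half_band

# Hypothetical simulated operative temperatures for a handful of occupied hours.
hourly_t_op = [23.1, 24.5, 26.0, 27.8, 29.2, 30.5, 28.4, 26.7]
low, high = adaptive_comfort_limits(t_out_mean=24.0)

hours_in_comfort = sum(low <= t <= high for t in hourly_t_op)
print(f"comfort zone: {low:.1f}-{high:.1f} C, "
      f"{hours_in_comfort}/{len(hourly_t_op)} hours inside it")
```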