988 results for Processing steps


Relevance: 30.00%

Abstract:

Over the past years, the fruit and vegetable industry has become interested in the application of both osmotic dehydration and vacuum impregnation as mild technologies because of their low temperature and energy requirements. Osmotic dehydration is a partial dewatering process in which cellular tissue is immersed in a hypertonic solution. The diffusion of water from the vegetable tissue to the solution is usually accompanied by a simultaneous counter-diffusion of solutes into the tissue. Vacuum impregnation is a unit operation in which porous products are immersed in a solution and subjected to a two-step pressure change. In the first step (vacuum), the pressure in the solid-liquid system is reduced and the gas in the product pores expands, partially flowing out. When atmospheric pressure is restored (second step), the residual gas in the pores is compressed and the external liquid flows into the pores. This unit operation allows specific solutes to be introduced into the tissue, e.g. antioxidants, pH regulators, preservatives and cryoprotectants. Fruits and vegetables interact dynamically with the environment, and the present study attempts to enhance our understanding of the structural, physico-chemical and metabolic changes of plant tissues upon the application of technological processes (osmotic dehydration and vacuum impregnation) by following a multianalytical approach. Macrostructural (low-frequency nuclear magnetic resonance), microstructural (light microscopy) and ultrastructural (transmission electron microscopy) measurements, combined with textural and differential scanning calorimetry analyses, allowed evaluating the effects of individual osmotic dehydration or vacuum impregnation processes on (i) the interaction between air and liquid in real plant tissues, (ii) the water state of the plant tissue and (iii) the cell compartments. Isothermal calorimetry, respiration and photosynthesis determinations were used to investigate the metabolic changes upon application of osmotic dehydration or vacuum impregnation. The proposed multianalytical approach should enable both better designs of processing technologies and estimations of their effects on tissue.
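
Under an ideal-gas assumption, the two-step pressure cycle admits a simple back-of-envelope estimate of how much external liquid enters the pores. The sketch below is not from the study itself; the porosity value, vacuum level and function name are illustrative assumptions.

```python
# Hedged back-of-envelope estimate of vacuum-impregnation uptake.
# Assumes ideal-gas behaviour and full pressure equilibration;
# names and values are illustrative, not from the study.

def impregnated_fraction(porosity: float, p_vac: float, p_atm: float = 101.325) -> float:
    """Fraction of total sample volume filled by external liquid
    after the vacuum (p_vac) -> atmospheric (p_atm) pressure cycle, in kPa.

    Ideal-gas compression: residual gas at p_vac shrinks by p_vac/p_atm
    when pressure is restored, so liquid fills the freed pore volume.
    """
    return porosity * (1.0 - p_vac / p_atm)

# Example: tissue with ~20% gas porosity, 10 kPa vacuum step
print(f"{impregnated_fraction(0.20, 10.0):.3f}")  # ~0.180, i.e. ~18% of tissue volume
```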

Relevance: 30.00%

Abstract:

Meprins α and β, a subgroup of zinc metalloproteinases belonging to the astacin family, are known to cleave components of the extracellular matrix, either during physiological remodeling or in pathological situations. In this study we present a new role for meprins in matrix assembly, namely the proteolytic processing of procollagens. Both meprin α and meprin β release the N- and C-propeptides from procollagen III, such processing events being critical steps in collagen fibril formation. In addition, both meprins cleave procollagen III at exactly the same site as the procollagen C-proteinases, including bone morphogenetic protein-1 (BMP-1) and other members of the tolloid proteinase family. Indeed, cleavage of procollagen III by meprins is more efficient than by BMP-1. Moreover, unlike BMP-1, whose activity is stimulated by procollagen C-proteinase enhancer proteins (PCPEs), the activity of meprins on procollagen III is diminished by PCPE-1. Finally, following our earlier observations of meprin expression by human epidermal keratinocytes, meprin β is also shown to be expressed by human dermal fibroblasts. In the dermis of fibrotic skin (keloids), expression of meprin β increases and meprin α begins to be detected. Our study suggests that meprins could be important players in several remodeling processes involving collagen fiber deposition.

Relevance: 30.00%

Abstract:

Two genes with related functions in RNA biogenesis were recently reported in patients with familial ALS: the FUS/TLS gene at the ALS6 locus and the TARDBP/TDP-43 gene at the ALS10 locus [1, 2]. FUS has been implicated in several steps of gene expression, including transcription regulation [3], RNA splicing [4, 5], mRNA transport in neurons [6] and, interestingly, microRNA (miRNA) processing [7]. The goal of this project is to identify the molecular mechanisms leading to the development of FUS mutation-associated ALS. Specifically, we want to test the hypothesis that these FUS mutations misregulate miRNA levels, which in turn affect the expression of genes critical for motor neuron survival. In addition, we want to test whether misregulation of the miRNA profile is a common feature of ALS. We have performed immunoprecipitations from total extracts of 293T cells expressing FLAG-tagged FUS to characterize its interactome by mass spectrometry. This proteomic study not only revealed a strong interaction of FUS with splicing factors but also showed that FUS might be involved in many, quite different pathways. To map which parts of the FUS protein contribute to the interaction with splicing factors, we have performed a set of experiments with a series of missense and deletion mutants. This approach will not only yield information on the binding partners of FUS, along with a map of the domains required for the interactions, but will also help to unravel whether certain ALS-associated FUS mutations lead to a loss or gain of function due to a gain or loss of interactors. Additionally, we have performed quantitative interactomics using SILAC to identify interactome differences of ALS-associated FUS mutants. To this end, we have performed immunoprecipitations of total extracts from 293T cells stably transduced with constructs expressing wild-type FUS-FLAG as well as three different ALS-associated mutants (G156E, R244C, P525L). First results indicate striking differences in the interactome with certain RNA-binding proteins. We are now validating these candidates in order to reveal the importance of these differential interactions in the context of ALS.
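
As an illustration of the SILAC-based comparison described above, the sketch below computes log2 heavy/light ratios from hypothetical pull-down intensities and flags putative differential interactors. The protein names, intensity values and 2-fold cut-off are invented for the example and are not data from this project.

```python
# Minimal sketch of how SILAC ratios could flag interactome differences
# between wild-type FUS and an ALS-associated mutant. All values are
# illustrative assumptions.
import math

# heavy = mutant pull-down, light = wild-type pull-down (hypothetical data)
intensities = {
    "HNRNPA1": (4.1e6, 1.0e6),
    "SRSF1":   (9.8e5, 1.1e6),
    "EWSR1":   (2.2e5, 1.9e6),
}

for protein, (heavy, light) in intensities.items():
    log2_ratio = math.log2(heavy / light)
    flag = "differential" if abs(log2_ratio) >= 1.0 else "unchanged"  # 2-fold cut-off
    print(f"{protein}: log2(H/L) = {log2_ratio:+.2f} ({flag})")
```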

Relevance: 30.00%

Abstract:

The levels of histone mRNA increase 35-fold as selectively detached mitotic CHO cells progress from mitosis through G1 and into S phase. Using an exogenous gene with a histone 3' end which is not sensitive to transcriptional or half-life regulation, we show that 3' processing is regulated as cells progress from G1 to S phase. The half-life of histone mRNA is similar in G1- and S-phase cells, as measured after inhibition of transcription by actinomycin D (dactinomycin) or indirectly after stabilization by the protein synthesis inhibitor cycloheximide. Taken together, these results suggest that the change in histone mRNA levels between G1- and S-phase cells must be due to an increase in the rate of biosynthesis, a combination of changes in transcription rate and processing efficiency. In G2 phase, there is a rapid 35-fold decrease in the histone mRNA concentration which our results suggest is due primarily to an altered stability of histone mRNA. These results are consistent with a model for cell cycle regulation of histone mRNA levels in which the effects on both RNA 3' processing and transcription, rather than alterations in mRNA stability, are the major mechanisms by which low histone mRNA levels are maintained during G1.
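
For context, the half-life measurements mentioned above reduce to fitting first-order decay to an mRNA time course after transcription shutoff. The sketch below shows this with invented data points; it is a generic log-linear least-squares fit, not the authors' actual analysis.

```python
# Sketch of half-life estimation from a transcription-shutoff time course
# (e.g. after actinomycin D). Data points are hypothetical.
import math

times = [0, 15, 30, 60, 90]               # min after actinomycin D
signal = [1.00, 0.78, 0.61, 0.37, 0.22]   # relative mRNA level

# Fit ln(signal) = -k*t + c by least squares, then t_half = ln(2)/k
n = len(times)
xbar = sum(times) / n
ybar = sum(math.log(s) for s in signal) / n
k = -sum((t - xbar) * (math.log(s) - ybar) for t, s in zip(times, signal)) / \
    sum((t - xbar) ** 2 for t in times)
print(f"k = {k:.4f} /min, half-life = {math.log(2) / k:.1f} min")
```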

Relevance: 30.00%

Abstract:

There is interest in performing pin-by-pin calculations coupled with thermal hydraulics in order to improve the accuracy of nuclear reactor analysis. In the framework of the EU NURISP project, INRNE and UPM have generated an experimental version of a few-group diffusion cross-section library with discontinuity factors, intended for VVER analysis at the pin level with the COBAYA3 code. The transport code APOLLO2 was used to perform the branching calculations. As a first proof of principle, the library was created for fresh fuel and covers almost the full parameter space of steady-state and transient conditions. The main objective is to test the calculation schemes and post-processing procedures, including multi-pin branching calculations. Two library options are being studied: one based on linear table interpolation and another using a functional fitting of the cross sections. The libraries generated with APOLLO2 have been tested with the pin-by-pin diffusion model in COBAYA3, including discontinuity factors, first by comparing 2D results against the APOLLO2 reference solutions and afterwards by using the libraries to compute a 3D assembly problem coupled with a simplified thermal-hydraulic model.
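
The contrast between the two library options can be illustrated with a toy example: the sketch below evaluates one cross section at an off-grid fuel temperature by linear table interpolation and by a simple functional fit. The grid points, cross-section values and the a + b·sqrt(T) fitting form are illustrative assumptions, not the actual APOLLO2/COBAYA3 data model.

```python
# Toy contrast of the two cross-section library options for one state
# parameter (fuel temperature). All numbers are illustrative.
import numpy as np

T_grid = np.array([560.0, 800.0, 1100.0, 1500.0])     # K, branch-calculation points
sigma_a = np.array([0.0262, 0.0268, 0.0274, 0.0281])  # absorption XS (1/cm), assumed

T_query = 950.0  # off-grid state point

# Option 1: linear table interpolation between branch calculations
xs_interp = np.interp(T_query, T_grid, sigma_a)

# Option 2: functional fit, here sigma(T) = a + b*sqrt(T) (Doppler-like trend)
A = np.vstack([np.ones_like(T_grid), np.sqrt(T_grid)]).T
a, b = np.linalg.lstsq(A, sigma_a, rcond=None)[0]
xs_fit = a + b * np.sqrt(T_query)

print(f"interpolated: {xs_interp:.5f}  fitted: {xs_fit:.5f}")
```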

Relevance: 30.00%

Abstract:

One important task in the design of an antenna is to carry out an analysis to find the characteristics of the antenna that best fulfill the specifications fixed by the application. After that, a prototype is manufactured, and the next stage in the design process is to check whether the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters such as directivity, gain, impedance, beamwidth, efficiency and polarization must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know exactly the actual electromagnetic behavior of the antenna under test. For this reason, most measurements are performed in anechoic chambers: closed, normally shielded areas covered by radiation-absorbing material, which simulate free-space propagation conditions thanks to the absorption provided by that material. Moreover, these facilities can be used independently of the weather conditions and allow measurements free from interference.

Despite all the advantages of anechoic chambers, the results obtained from both far-field and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms that improve the quality of the results obtained in antenna measurements by using post-processing techniques, without requiring additional measurements. First, a thorough review of the state of the art has been carried out in order to give a general view of the possibilities for characterizing or reducing the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated both theoretically and numerically. The basis of all of them is the same: to perform a transformation from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors. The four errors analyzed are noise, reflections, truncation errors and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative algorithms to extrapolate functions. The main idea of all the methods is therefore to modify the classical near-field-to-far-field transformations by including additional steps with which errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, additional measurements are not required.

Noise is the most widely studied error in this Thesis; a total of three alternatives are proposed to filter out an important noise contribution before obtaining the far-field pattern. The first is based on modal filtering. The second uses a source reconstruction technique to obtain the extreme near field, where it is possible to apply spatial filtering. The last is to back-propagate the measured field to a surface with the same geometry as the measurement surface but closer to the AUT, and then also apply spatial filtering. All the alternatives are analyzed in the three most common near-field systems, including comprehensive statistical noise analyses in order to deduce the signal-to-noise-ratio improvement achieved in each case.
The method to suppress reflections in antenna measurements is also based on a source reconstruction technique; the main idea is to reconstruct the field over a surface larger than the antenna aperture in order to identify and later suppress the virtual sources related to the reflected waves. The truncation error present in the results obtained from planar, cylindrical and partial spherical near-field measurements is the third error analyzed in this Thesis. The method to reduce this error is based on an iterative algorithm that extrapolates the reliable region of the far-field pattern from knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm, as well as other critical aspects of the method, are also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method estimates the leakage bias constant added by the receiver's quadrature detector to every near-field datum and then suppresses its effect on the far-field pattern. The second method can be divided into two parts: the first finds the position of the faulty component that radiates or receives unwanted radiation, making its identification within the measurement environment and its later substitution easier; the second is able to computationally remove the leakage effect without requiring the substitution of the faulty component.
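
To make the modal-filtering idea concrete, the sketch below applies it to planar near-field data: the measured field is transformed to the plane-wave spectrum, and the evanescent region, which cannot carry radiated field at the measurement distance, is zeroed before transforming back. The frequency, sampling and stand-in noisy field are illustrative assumptions, not the thesis' actual measurement setups.

```python
# Minimal sketch of modal filtering for planar near-field data: noise
# power landing in the evanescent region of the plane-wave spectrum
# (kx^2 + ky^2 > k0^2) is discarded. Geometry and data are illustrative.
import numpy as np

wavelength = 0.03            # 10 GHz, assumed
k0 = 2 * np.pi / wavelength
dx = wavelength / 2          # Nyquist sampling on the measurement plane
N = 128

rng = np.random.default_rng(0)
E_meas = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))  # stand-in measured field

# Transform to the plane-wave (modal) domain
spectrum = np.fft.fftshift(np.fft.fft2(E_meas))
kx = np.fft.fftshift(np.fft.fftfreq(N, d=dx)) * 2 * np.pi
KX, KY = np.meshgrid(kx, kx)

# Keep only propagating modes; noise in the evanescent region is removed
spectrum[KX**2 + KY**2 > k0**2] = 0.0
E_filtered = np.fft.ifft2(np.fft.ifftshift(spectrum))
```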

Relevance: 30.00%

Abstract:

The PAMELA (Phased Array Monitoring for Enhanced Life Assessment) SHM™ system is an integrated, embedded system based on ultrasonic guided waves, consisting of several electronic devices and one system manager controller. The data collected by all PAMELA devices in the system must be transmitted to the controller, which is responsible for carrying out the advanced signal processing to obtain SHM maps. PAMELA devices consist of hardware based on a Virtex 5 FPGA with a PowerPC 440 running an embedded Linux distribution. Therefore, PAMELA devices, in addition to being able to perform tests and transmit the collected data to the controller, are capable of local data processing or pre-processing (reduction, normalization, pattern recognition, feature extraction, etc.). Local data processing decreases the data traffic over the network and reduces the CPU load of the external computer. PAMELA devices can even run autonomously, performing scheduled tests and communicating with the controller only when structural damage is detected or when programmed to do so. Each PAMELA device integrates a software management application (SMA) that allows the developer to download his or her own algorithm code and add the new data processing algorithm to the device. The development of the SMA is done in a virtual machine with an Ubuntu Linux distribution that includes all the software tools necessary to perform the entire development cycle. The Eclipse IDE (Integrated Development Environment) is used to develop the SMA project and to write the code of each data processing algorithm. This paper presents the developed software architecture and describes the steps necessary to add new data processing algorithms to the SMA in order to increase the processing capabilities of PAMELA devices. An example of basic damage index estimation using a delay-and-sum algorithm is provided.
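
As a hint of what such an algorithm looks like, here is a minimal sketch of delay-and-sum damage-index estimation: each actuator-sensor residual signal is sampled at the expected time of flight through a candidate pixel, and the contributions are summed so that damage shows up as a coherent peak. The array geometry, wave speed and random stand-in signals are illustrative assumptions, not PAMELA's actual implementation.

```python
# Sketch of delay-and-sum damage-index estimation for guided-wave SHM.
# All parameters and signals are illustrative assumptions.
import numpy as np

c = 5000.0   # assumed guided-wave group velocity (m/s)
fs = 1.0e6   # sampling rate (Hz)
sensors = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5], [0.5, 0.5]])  # m
# stand-in baseline-subtracted signals, indexed [transmitter, receiver, sample]
residuals = np.random.default_rng(1).standard_normal((4, 4, 2000))

def damage_index(pixel):
    """Delay-and-sum amplitude at one map pixel."""
    total = 0.0
    for tx in range(len(sensors)):
        for rx in range(len(sensors)):
            # actuator -> pixel -> sensor path length
            d = np.linalg.norm(sensors[tx] - pixel) + np.linalg.norm(pixel - sensors[rx])
            idx = int(round(d / c * fs))  # expected time-of-flight sample
            if idx < residuals.shape[2]:
                total += residuals[tx, rx, idx]
    return abs(total)

print(damage_index(np.array([0.25, 0.25])))
```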

Relevance: 30.00%

Abstract:

The present study addresses the assembly of PsaD, a peripheral membrane protein of the photosystem I complex, into the chloroplast thylakoid membranes. Located on the stromal side of the thylakoids, PsaD was found to assemble into the membranes in vitro both in its precursor (pre-PsaD) and in its mature (PsaD) form. Newly assembled, unprocessed pre-PsaD was resistant to NaBr and alkaline washes, yet it was sensitive to proteolytic digestion. In contradistinction, when the assembled precursor was processed, the resulting mature PsaD was resistant to proteases to the same extent as endogenous PsaD. The accumulation of protease-resistant PsaD in the thylakoids correlated with the increase of mature PsaD in the membranes. This protection of mature PsaD from proteolysis could not be observed when PsaD was in a soluble form, i.e., not assembled within the thylakoids. The data suggest that pre-PsaD assembles into the membranes and that processing takes place only in a second step. The observation that the assembly of pre-PsaD is affected by salts to a much lesser extent than that of mature PsaD supports a two-step assembly of pre-PsaD.

Relevance: 30.00%

Abstract:

We report the impact of penalties induced by cascaded reconfigurable optical add-drop multiplexers (ROADMs) on coherently detected 28 Gbaud polarization-multiplexed m-ary quadrature amplitude modulation (PM m-ary QAM) WDM channels. We investigate the interplay between different higher-order modulation channels and the effect of the filter shapes and bandwidths of the (de)multiplexers on transmission performance, in a segment of a pan-European optical network with a maximum optical path of 4,560 km (80 km × 57 spans). We verify that if the link capacities are assigned assuming that digital back-propagation (DBP) is available, 25% of the network connections fail when electronic dispersion compensation alone is used. However, the majority of such links can indeed be restored by employing single-channel digital back-propagation with fewer than 15 steps for the whole link, facilitating practical application of DBP. We find that higher-order channels are the most sensitive to nonlinear fiber impairments and filtering effects; however, these formats are less prone to ROADM-induced penalties due to the reduced maximum number of hops. Furthermore, we demonstrate that a minimum filter Gaussian order of 3 and a bandwidth of 35 GHz enable negligible excess penalty for any modulation order.
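
For readers unfamiliar with DBP, the sketch below shows its core: a coarse split-step Fourier loop that propagates the received field through the link with the signs of dispersion and Kerr nonlinearity inverted (fiber loss and amplifier gain are omitted for brevity, and sign conventions vary). The fiber parameters and stand-in received field are illustrative assumptions, not the paper's simulation setup.

```python
# Sketch of single-channel digital back-propagation via a coarse
# split-step Fourier method. Parameters are typical SSMF values,
# chosen for illustration only.
import numpy as np

beta2 = -21.7e-27          # s^2/m, group-velocity dispersion (assumed)
gamma = 1.3e-3             # 1/(W m), nonlinear coefficient (assumed)
span_len, n_spans, n_steps = 80e3, 57, 15
fs = 56e9                  # 2 samples/symbol at 28 Gbaud

rng = np.random.default_rng(2)
field = 0.01 * (rng.standard_normal(4096) + 1j * rng.standard_normal(4096))  # stand-in received field

dz = span_len * n_spans / n_steps   # one coarse step for the whole link / n_steps
w = 2 * np.pi * np.fft.fftfreq(field.size, d=1 / fs)

for _ in range(n_steps):
    # linear half of the inverse channel: dispersion applied backwards
    field = np.fft.ifft(np.fft.fft(field) * np.exp(1j * (beta2 / 2) * w**2 * (-dz)))
    # nonlinear half: Kerr phase rotation with flipped sign
    field *= np.exp(-1j * gamma * np.abs(field) ** 2 * dz)
```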

Relevance: 30.00%

Abstract:

Recent advances in our ability to watch the molecular and cellular processes of life in action, such as atomic force microscopy, optical tweezers and Förster fluorescence resonance energy transfer, raise challenges for the digital signal processing (DSP) of the resulting experimental data. This article explores the unique properties of such biophysical time series that set them apart from other signals, such as the prevalence of abrupt jumps and steps, multi-modal distributions and autocorrelated noise. It exposes the problems with classical linear DSP algorithms applied to this kind of data, and describes new nonlinear and non-Gaussian algorithms that are able to extract information of direct relevance to biological physicists. It is argued that these new methods applied in this context typify the nascent field of biophysical DSP. Practical experimental examples are supplied.
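
A minimal example of the nonlinear, step-friendly processing the article advocates: a running median suppresses noise without smearing abrupt jumps the way a linear smoother would, after which steps can be flagged by thresholding the filtered differences. The synthetic trace, window length and threshold below are illustrative choices, not the article's specific algorithms.

```python
# Sketch of nonlinear step detection in a noisy biophysical-style trace:
# running-median filtering followed by jump thresholding. Data synthetic.
import numpy as np

rng = np.random.default_rng(4)
truth = np.repeat([0.0, 1.0, 1.0, 2.5, 1.5], 200)      # molecular-motor-like steps
trace = truth + 0.3 * rng.standard_normal(truth.size)  # autocorrelation ignored here

win = 21
pad = win // 2
padded = np.pad(trace, pad, mode="edge")
# Running median: preserves sharp steps, unlike a moving average
filtered = np.array([np.median(padded[i:i + win]) for i in range(trace.size)])

# Flag samples where the filtered trace jumps by more than a threshold
jumps = np.flatnonzero(np.abs(np.diff(filtered)) > 0.4)
print("steps detected near samples:", jumps)
```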

Relevance: 30.00%

Abstract:

Poly(L-lactide-co-ε-caprolactone), P(LL-co-CL), of 75:25 mol% composition was synthesized via bulk ring-opening polymerisation (ROP) using a novel tin(II) alkoxide initiator, [Sn(Oct)]2DEG, at 130 °C for 48 h. The effectiveness of this initiator was compared with the well-known conventional tin(II) octoate initiator, Sn(Oct)2. The P(LL-co-CL) copolymers obtained were characterized using a combination of analytical techniques, including nuclear magnetic resonance spectroscopy (NMR), differential scanning calorimetry (DSC), thermogravimetry (TG) and gel permeation chromatography (GPC). The P(LL-co-CL) was melt-spun into monofilament fibres of uniform diameter and smooth surface appearance. Modification of the matrix morphology was then built into the as-spun fibres via a series of controlled off-line annealing and hot-drawing steps. © (2014) Trans Tech Publications, Switzerland.

Relevance: 30.00%

Abstract:

The activities of the Institute of Information Technologies in the area of automatic text processing are outlined. Major problems related to different steps of processing are pointed out together with the shortcomings of the existing solutions.

Relevance: 30.00%

Abstract:

ACM Computing Classification System (1998): I.7, I.7.5.

Relevance: 30.00%

Abstract:

A poly(L-lactide-co-caprolactone) copolymer, P(LL-co-CL), of composition 75:25 mol% was synthesized via the bulk ring-opening copolymerization of L-lactide and ε-caprolactone using a novel bis[tin(II) monooctoate] diethylene glycol coordination-insertion initiator, OctSn-OCH2CH2OCH2CH2O-SnOct. The P(LL-co-CL) copolymer obtained was characterized by a combination of analytical techniques, namely nuclear magnetic resonance spectroscopy, gel permeation chromatography, dilute-solution viscometry, differential scanning calorimetry, and thermogravimetric analysis. For processing into a monofilament fiber, the copolymer was melt spun with minimal draw to give a largely amorphous and unoriented as-spun fiber. The fiber's oriented semicrystalline morphology, necessary to give the required balance of mechanical properties, was then developed via a sequence of controlled offline hot-drawing and annealing steps. Depending on the final draw ratio, the fibers obtained had tensile strengths in the region of 200–400 MPa.

Relevance: 30.00%

Abstract:

This research studies a hybrid flow shop problem that has parallel batch-processing machines in one stage and discrete-processing machines in the other stages, processing jobs of arbitrary sizes. The objective is to minimize the makespan for a set of jobs. The problem is denoted FF|batch(1), s_j|C_max and is formulated as a mixed-integer linear program. The commercial solver AMPL/CPLEX is used to solve problem instances to optimality. Experimental results show that AMPL/CPLEX requires considerable time to find the optimal solution even for a small problem: a 6-job instance requires 2 hours on average. A bottleneck-first-decomposition (BFD) heuristic is proposed in this study to overcome the computation-time problem encountered with the commercial solver. The proposed BFD heuristic is inspired by the shifting-bottleneck heuristic. It decomposes the entire problem into three sub-problems and schedules the sub-problems one by one. The proposed BFD heuristic consists of four major steps: formulating sub-problems, prioritizing sub-problems, solving sub-problems and re-scheduling. For solving the sub-problems, two heuristic algorithms are proposed: one for scheduling a hybrid flow shop with discrete processing machines, and the other for scheduling parallel batching machines (a single stage). Both consider job arrival and delivery times. A designed experiment is conducted to evaluate the effectiveness of the proposed BFD, which is further evaluated against a set of common heuristics including a randomized greedy heuristic and five dispatching rules. The results show that the proposed BFD heuristic outperforms all these algorithms. To evaluate the quality of the heuristic solution, a procedure is developed to calculate a lower bound on the makespan for the problem under study. The lower bound obtained is tighter than other bounds developed for related problems in the literature. A meta-search approach based on the genetic algorithm concept is developed to evaluate the significance of further improving the solution obtained from the proposed BFD heuristic. The experiment indicates that it reduces the makespan by 1.93% on average within negligible time when the problem size is less than 50 jobs.
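
To give a flavor of the batching sub-problem, the sketch below packs jobs of arbitrary sizes into capacity-limited batches by first-fit decreasing, lets each batch take as long as its longest job, and assigns batches to parallel machines longest-processing-time first. The job data and the two greedy rules are illustrative; the thesis' BFD heuristic is considerably richer.

```python
# Sketch of the parallel-batching flavor of the problem: FFD batching
# plus LPT machine assignment. Data and rules are illustrative only.
import heapq

jobs = [(4, 7.0), (3, 5.0), (5, 9.0), (2, 3.0), (4, 6.0), (1, 2.0)]  # (size, time)
capacity, machines = 8, 2

# First-fit decreasing by job size into capacity-limited batches;
# a batch's processing time is the maximum job time it contains
batches = []  # each entry: [remaining capacity, batch processing time]
for size, time in sorted(jobs, reverse=True):
    for b in batches:
        if b[0] >= size:
            b[0] -= size
            b[1] = max(b[1], time)
            break
    else:
        batches.append([capacity - size, time])

# Longest-processing-time-first assignment of batches to parallel machines
loads = [0.0] * machines
heapq.heapify(loads)
for _, time in sorted(batches, key=lambda b: -b[1]):
    heapq.heappush(loads, heapq.heappop(loads) + time)
print("makespan of batching stage ≈", max(loads))
```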