963 results for signal processing in the encrypted domain


Relevance: 100.00%

Abstract:

The establishment of Export Processing Zones (EPZs) is a strategy for economic development that was introduced almost fifty years ago and is nowadays employed in a large number of countries. While the number of EPZs, including variants such as Special Economic Zones (SEZs), has increased continuously, general interest in EPZs has declined over the years, in contrast to the earlier heated debates regarding the efficacy of the strategy and its welfare effects, especially on women workers. This article re-evaluates the historical trajectories and outstanding labour and gender issues of EPZs on the basis of the experiences of South Korea, Bangladesh and India. The findings suggest the need to enlarge our analytical scope with regard to EPZs, which are inextricably connected with external employment structures, whether outside the EPZ but within the same country, or outside the EPZ and its host country altogether.

Relevance: 100.00%

Abstract:

A method to reduce the noise power in the far-field pattern without modifying the desired signal is proposed, so that a significant signal-to-noise ratio improvement can be achieved. The method applies when the antenna measurement is performed in the planar near field and the recorded data are assumed to be corrupted by white, Gaussian, space-stationary noise caused by the receiver's additive noise. When the measured field is back-propagated from the scan plane to the antenna under test (AUT) plane, the noise remains white, Gaussian and space-stationary, whereas the desired field is theoretically concentrated within the antenna aperture. Thanks to this fact, a spatial filter can be applied that cancels the field located outside the AUT dimensions, which consists only of noise. Next, a planar near-field to far-field transformation is carried out, achieving a great improvement over the pattern obtained directly from the measurement. To verify the effectiveness of the method, two examples are presented using both simulated and measured near-field data.
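The processing chain described above (back-propagation, aperture-plane masking, near-field to far-field transformation) can be sketched with a plane-wave-spectrum approach. The following Python fragment is only an illustration of the idea, not the authors' implementation; the frequency, sample spacing, scan distance, aperture size and the placeholder input data are all assumptions.

```python
import numpy as np

# Assumed measurement parameters (illustrative only).
c = 3e8
f = 10e9                       # measurement frequency [Hz]
k0 = 2 * np.pi * f / c
dx = 0.01                      # sample spacing on the scan plane [m]
N = 256                        # samples per axis
d = 0.20                       # scan plane to AUT plane distance [m]

# Placeholder for the measured (noisy) tangential field on the scan plane.
rng = np.random.default_rng(0)
E_meas = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

# Plane-wave spectrum of the measured field.
k = 2 * np.pi * np.fft.fftfreq(N, dx)
KX, KY = np.meshgrid(k, k, indexing="ij")
kz = np.sqrt(np.maximum(k0**2 - KX**2 - KY**2, 0.0))   # propagating modes only

# Back-propagate to the AUT plane (sign depends on the time convention used).
E_aut = np.fft.ifft2(np.fft.fft2(E_meas) * np.exp(1j * kz * d))

# Spatial filter: keep only the field inside the assumed antenna aperture;
# everything outside is considered pure noise and is cancelled.
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
aperture = (np.abs(X) < 0.15) & (np.abs(Y) < 0.15)     # assumed 0.3 m x 0.3 m AUT
E_filtered = np.where(aperture, E_aut, 0.0)

# Far-field pattern (up to obliquity and normalization factors) from the
# angular spectrum of the filtered aperture field.
far_field = np.fft.fftshift(np.fft.fft2(E_filtered))
```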

Relevance: 100.00%

Abstract:

To properly understand and model animal embryogenesis it is crucial to obtain detailed measurements, in both time and space, of gene expression domains and cell dynamics. This challenge has been addressed in recent years by a surge of atlases that integrate a statistically relevant number of individuals to obtain robust, complete information about the spatiotemporal location of gene expression patterns. This paper discusses the fundamental image analysis strategies required to build such models and the most common problems found along the way. We also discuss the main challenges and future goals in the field.

Relevance: 100.00%

Abstract:

Linear regression is a technique widely used in digital signal processing. It consists of finding the linear function that best fits a given set of samples. This paper proposes different hardware architectures for the implementation of the linear regression method on FPGAs, especially targeting area-restricted systems. The approach saves area at the cost of constraining the length of the input signal to a set of fixed values. We have implemented the proposed scheme in an Automatic Modulation Classifier, meeting the hard real-time constraints this kind of system imposes.
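The area saving that comes from fixing the input length can be illustrated in software: for a hardwired number of samples N taken at uniform indices 0..N-1, every x-dependent term of the least-squares solution becomes a constant, so the datapath only needs two accumulators over the incoming signal. A minimal sketch, assuming N = 64 and a synthetic test ramp:

```python
N = 64
S_x = N * (N - 1) // 2                  # sum of n, n = 0..N-1 (constant for fixed N)
S_xx = (N - 1) * N * (2 * N - 1) // 6   # sum of n^2 (constant for fixed N)
DEN = N * S_xx - S_x * S_x              # constant denominator of the LS solution

def fit_line_fixed_length(y):
    """Least-squares fit y[n] ~ a*n + b for exactly N samples."""
    assert len(y) == N
    S_y = sum(y)                                   # accumulator 1
    S_xy = sum(n * yn for n, yn in enumerate(y))   # accumulator 2
    a = (N * S_xy - S_x * S_y) / DEN
    b = (S_y - a * S_x) / N
    return a, b

# A noiseless ramp 0.5*n + 2.0 should give back slope 0.5 and intercept 2.0.
print(fit_line_fixed_length([0.5 * n + 2.0 for n in range(N)]))
```

In hardware, the divisions by the constants DEN and N would typically be replaced by multiplications with precomputed reciprocals, which is where the area saving materializes.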

Relevance: 100.00%

Abstract:

A review is presented of the main techniques that have been proposed for the temporal processing of optical pulses and that are the counterpart of well-known spatial arrangements. They are translated to the temporal domain via the space-time duality and implemented with electro-optical phase and amplitude modulators and dispersive devices. We introduce new variations of the conventional approaches and focus on their application to optical communication systems.
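As a rough numerical illustration of the space-time duality mentioned above, the sketch below models the two basic building blocks: a dispersive device as a quadratic spectral phase (the temporal analogue of diffraction) and an ideal electro-optic phase modulator driven quadratically in time as a time lens. All pulse and device parameters are arbitrary assumptions, not values from the review.

```python
import numpy as np

N = 4096
t = np.linspace(-200e-12, 200e-12, N)            # time grid [s]
dt = t[1] - t[0]
omega = 2 * np.pi * np.fft.fftfreq(N, dt)        # angular frequency grid [rad/s]

T0 = 5e-12                                       # assumed 5 ps Gaussian pulse
a_in = np.exp(-t**2 / (2 * T0**2))

def disperse(a, beta2_z):
    """Dispersive device: quadratic spectral phase exp(-j*beta2*z*w^2/2)."""
    return np.fft.ifft(np.fft.fft(a) * np.exp(-0.5j * beta2_z * omega**2))

def time_lens(a, chirp_rate):
    """Ideal phase modulator: quadratic temporal phase (the 'time lens')."""
    return np.exp(0.5j * chirp_rate * t**2) * a

def rms_width(w, intensity):
    return np.sqrt(np.sum(w**2 * intensity) / np.sum(intensity))

# Dispersion broadens the pulse in time, as diffraction spreads a beam.
a_disp = disperse(a_in, beta2_z=-20e-24)         # assumed -20 ps^2 of net dispersion
print(rms_width(t, np.abs(a_in)**2), rms_width(t, np.abs(a_disp)**2))

# The time lens leaves the temporal intensity untouched but broadens the
# spectrum, as a thin lens adds wavefront curvature to a beam.
a_lens = time_lens(a_in, chirp_rate=2e23)        # assumed chirp rate [rad/s^2]
S = lambda a: np.abs(np.fft.fft(a))**2
print(rms_width(omega, S(a_in)), rms_width(omega, S(a_lens)))
```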

Relevance: 100.00%

Abstract:

The proliferation of wireless sensor networks and the variety of envisioned applications associated with them have motivated the development of distributed algorithms for collaborative processing over networked systems. One of the applications that has attracted the attention of researchers is target localization, where the nodes of the network try to estimate the position of an unknown target that lies within the coverage area. Particularly challenging is the problem of estimating the target's position from the received signal strength indicator (RSSI), due to the nonlinear relationship between the measured signal and the true position of the target. Many of the existing approaches suffer either from high computational complexity (e.g., particle filters) or from a lack of accuracy. Furthermore, many of the proposed solutions are centralized, which makes their application to a sensor network questionable. Depending on the application at hand, and from a practical perspective, it can be convenient to find a balance between localization accuracy and complexity. In this direction, we approach the maximum likelihood location estimation problem by solving a suboptimal (and more tractable) problem. One of the main advantages of the proposed scheme is that it allows for a decentralized implementation using distributed processing tools (e.g., consensus and convex optimization) and is therefore very suitable for implementation in real sensor networks. If further accuracy is needed, an additional refinement step can be performed around the solution found. Under the assumption of independent noise among the nodes, such a local search can be done in a fully distributed way using a distributed version of the Gauss-Newton method based on consensus. Regardless of the underlying application or function of the sensor network, it is always necessary to have a mechanism for data reporting. While some approaches use a special kind of node (called a sink node) for data harvesting and forwarding to the outside world, there are scenarios where such an approach is impractical or even impossible to deploy. Furthermore, such sink nodes become a bottleneck in terms of traffic flow and power consumption. To overcome these issues, instead of using sink nodes for data reporting one can use collaborative beamforming techniques to forward the generated data directly to a base station or gateway to the outside world. In a distributed environment like a sensor network, nodes cooperate in order to form a virtual antenna array that can exploit the benefits of multi-antenna communications. In collaborative beamforming, nodes synchronize their phases so that their transmissions add constructively at the receiver. One of the inconveniences associated with collaborative beamforming techniques is that there is no control over the radiation pattern, since it is treated as a random quantity. This may cause interference to other coexisting systems and fast battery depletion at the nodes. Since energy efficiency is a major design issue, we consider the development of a distributed collaborative beamforming scheme that maximizes the network lifetime while meeting a quality of service (QoS) requirement at the receiver side. Using local information about battery status and channel conditions, we derive distributed algorithms that converge to the optimal centralized beamformer.
While in the first part we consider only battery depletion due to communications beamforming, we extend the model to account for more realistic scenarios by introducing an additional random energy consumption. It is shown how the new problem generalizes the original one and under which conditions it is easily solvable. By formulating the problem from the energy-efficiency perspective, the network's lifetime is significantly improved.
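As an illustration of the local refinement discussed above, the following sketch runs a plain, centralized Gauss-Newton search for the target position under a log-distance path-loss model for the RSSI; in the thesis this step is distributed across nodes via consensus, which is not reproduced here. The model parameters, node layout and noise level are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
P0, gamma = -40.0, 3.0                        # assumed path-loss model parameters
sensors = rng.uniform(0, 100, size=(20, 2))   # assumed node positions [m]
target = np.array([55.0, 30.0])               # unknown in practice; used to simulate data

d_true = np.linalg.norm(sensors - target, axis=1)
rssi = P0 - 10 * gamma * np.log10(d_true) + rng.normal(0, 2.0, size=d_true.shape)

def gauss_newton(x0, iters=20):
    """Refine a coarse position estimate by minimizing the RSSI residuals."""
    x = x0.astype(float)
    for _ in range(iters):
        diff = x - sensors                          # (M, 2)
        d = np.linalg.norm(diff, axis=1)
        r = rssi - (P0 - 10 * gamma * np.log10(d))  # residuals
        J = (10 * gamma / np.log(10)) * diff / d[:, None] ** 2  # d r_i / d x
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
        if np.linalg.norm(step) < 1e-6:
            break
    return x

# Start from a coarse estimate (here simply the centroid of the nodes),
# playing the role of the suboptimal initial solution mentioned above.
print(gauss_newton(sensors.mean(axis=0)))
```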

Relevance: 100.00%

Abstract:

In this correspondence, the conditions for using any kind of discrete cosine transform (DCT) for multicarrier data transmission are derived. The symmetric convolution-multiplication property of each DCT implies that when symmetric convolution is performed in the time domain, an element-by-element multiplication is performed in the corresponding discrete trigonometric domain. Therefore, by appending symmetric redundancy (as prefix and suffix) to each data symbol to be transmitted, and by enforcing symmetry for the equivalent channel impulse response, the linear convolution performed in the transmission channel becomes a symmetric convolution in the samples of interest. Furthermore, channel equalization can be carried out by means of a bank of scalars in the corresponding discrete cosine transform domain. The expressions for obtaining the value of each scalar of these one-tap-per-subcarrier equalizers are presented. The study is completed with several computer simulations in mobile broadband wireless communication scenarios, considering the presence of carrier frequency offset (CFO). The obtained results indicate that the proposed systems outperform the standardized ones based on the DFT.
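The symmetric convolution-multiplication property that the scheme relies on can be checked numerically. The sketch below is an illustration of the principle, not the paper's transceiver: it symmetrically extends a data block and a toy equivalent channel, shows that their convolution corresponds to an element-wise product of purely real spectra (the real spectrum of a whole-sample symmetric extension is, up to scaling, a cosine transform of the original block), and recovers the data with one real scalar per subcarrier.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
x = rng.standard_normal(N)                       # one data symbol (toy example)
h = np.array([1.0, 0.5, 0.25, 0, 0, 0, 0, 0])    # toy equivalent channel

def ws_extend(v):
    """Whole-sample symmetric extension: [v0 .. v_{N-1}, v_{N-2} .. v1]."""
    return np.concatenate([v, v[-2:0:-1]])

xe, he = ws_extend(x), ws_extend(h)              # length 2N - 2 each

# The spectra of the symmetric extensions are purely real (cosine transforms).
X = np.real(np.fft.fft(xe))
H = np.real(np.fft.fft(he))
print(np.max(np.abs(np.imag(np.fft.fft(xe)))))   # ~0

# Circular convolution of the extensions equals an element-wise product of
# these real spectra: the convolution-multiplication property.
y_time = np.real(np.fft.ifft(np.fft.fft(xe) * np.fft.fft(he)))
y_freq = np.real(np.fft.ifft(X * H))
print(np.allclose(y_time, y_freq))               # True

# One-tap-per-subcarrier equalization: divide by the real scalars H.
x_hat = np.real(np.fft.ifft((X * H) / H))[:N]
print(np.allclose(x_hat, x))                     # True
```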

Relevance: 100.00%

Abstract:

Background: [NiFe] hydrogenases are enzymes that catalyze the oxidation of hydrogen into protons and electrons, to use H2 as an energy source, or the production of hydrogen through proton reduction, as an escape valve for the excess of reducing equivalents in anaerobic metabolism. Biosynthesis of [NiFe] hydrogenases is a complex process that occurs in the cytoplasm, where a number of auxiliary proteins are required to synthesize and insert the metal cofactors into the enzyme structural units. The endosymbiotic bacterium Rhizobium leguminosarum requires the products of eighteen genes (hupSLCDEFGHIJKhypABFCDEX) to synthesize an active hydrogenase. The hupF and hupK genes are found only in hydrogenase clusters from bacteria expressing hydrogenase in the presence of oxygen. Results: HupF is a HypC paralogue with a similar predicted structure, except for the C-terminal domain present only in HupF. Deletion of hupF results in the inability to process the hydrogenase large subunit HupL, and also in reduced stability of this subunit when cells are exposed to high oxygen tensions. A ΔhupF mutant was fully complemented for hydrogenase activity by a C-terminal deletion derivative under symbiotic, ultra-low oxygen tensions, but only partial complementation was observed in free-living cells under higher oxygen tensions (1% or 3%). Co-purification experiments using StrepTag-labelled HupF derivatives and mass spectrometry analysis indicate the existence of a major complex involving HupL and HupF, and a less abundant HupF-HupK complex. Conclusions: The results indicate that HupF has a dual role during hydrogenase biosynthesis: it is required for hydrogenase large subunit processing and it also acts as a chaperone to stabilize HupL when hydrogenase is synthesized in the presence of oxygen.

Relevance: 100.00%

Abstract:

Personalized health (p-health) systems can contribute significantly to the sustainability of healthcare systems, though their feasibility is yet to be proven. One of the problems related to their development is the lack of well-established development tools for this domain. As the p-health paradigm is focused on patient self-management, major challenges arise around the design and implementation of patient systems. This paper presents a reference platform created for the development of these applications and shows the advantages of its adoption in a complex project dealing with cardiovascular diseases.

Relevance: 100.00%

Abstract:

When users face a problem that requires a product, service, or action to solve it, selecting the best alternative can be a difficult task due to the uncertainty about their quality. This is especially the case in domains where users lack expertise, as for example in Software Engineering. Multiple-criteria decision-making (MCDM) methods help in making better decisions when facing the complex problem of selecting the best solution among a group of alternatives that can be compared according to different conflicting criteria. In MCDM problems, alternatives represent concrete products, services or actions that will help in achieving a goal, while criteria represent the characteristics of these alternatives that are important for making a decision.
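To make the alternatives/criteria vocabulary concrete, here is a minimal sketch of one of the simplest MCDM methods, a weighted sum with min-max normalization; the alternatives, criteria and weights are invented for illustration, and the paper does not necessarily use this particular method.

```python
from typing import Dict, List, Tuple

def weighted_sum_rank(alternatives: Dict[str, List[float]],
                      weights: List[float],
                      benefit: List[bool]) -> List[Tuple[str, float]]:
    """Rank alternatives by a weighted sum of min-max normalized criteria.

    'benefit' marks criteria where higher values are better; the rest are costs.
    """
    cols = list(zip(*alternatives.values()))          # one column per criterion
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    scores = {}
    for name, vals in alternatives.items():
        s = 0.0
        for j, v in enumerate(vals):
            norm = 0.0 if hi[j] == lo[j] else (v - lo[j]) / (hi[j] - lo[j])
            if not benefit[j]:                        # cost criterion: lower is better
                norm = 1.0 - norm
            s += weights[j] * norm
        scores[name] = s
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical example: choosing a library by quality, documentation and cost.
alts = {"LibA": [7.0, 9.0, 120.0], "LibB": [8.5, 6.0, 80.0], "LibC": [6.0, 8.0, 40.0]}
print(weighted_sum_rank(alts, weights=[0.5, 0.3, 0.2], benefit=[True, True, False]))
```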

Relevance: 100.00%

Abstract:

The aim of this paper is to discuss the meaning of five neologisms in the domain of videogames in Spanish: título, aventura, personaje, plataforma, and rol. Our study focuses on a special type of neologism, since the Spanish terms we deal with here are not strictly new words; they are what have been called sense neologisms or neosemanticisms, that is, old words taking on a new sense in a different domain. These words were identified as new concepts after a process of analysis based on contextual evidence. This study of neology is based on the analysis of a corpus of press articles evaluating videogames published by the Spanish newspaper El País from 1998 to 2008. The analysis of the instances of use of domain-specific terms in the corpus revealed that they acquired new senses different from those they have in the other domains where they are also used. The paper explains the process of discovering the specialized meaning these words have developed in the domain of videogames and how the analysis of collocational behaviour helps in discovering the new sense and in designing the definition provided.

Relevance: 100.00%

Abstract:

The genome of the Gram-negative bacterium Pseudomonas putida harbours a complete set of xcp genes for a type II protein secretion system (T2SS). This study shows that expression of these genes is induced under inorganic phosphate (Pi) limitation and that the system enables the utilization of various organic phosphate sources. A phosphatase of the PhoX family, previously designated UxpB, was identified, which was produced under low-Pi conditions and transported across the cell envelope in an Xcp-dependent manner, demonstrating that the xcp genes encode an active T2SS. The signal sequence of UxpB contains a twin-arginine translocation (Tat) motif as well as a lipobox, and both processing by leader peptidase II and Tat dependency were experimentally confirmed. Two different tat gene clusters were detected in the P. putida genome, of which one, named tat-1, is located adjacent to the uxpB and xcp genes. Both Tat systems appeared to be capable of transporting the UxpB protein. However, expression of the tat-1 genes was strongly induced by low Pi levels, indicating a function of this system in survival during Pi starvation.
