991 results for Processing Element Array
Abstract:
This thesis presents two frameworks, a software framework and a hardware core manager framework, which together can be used to develop a processing platform using a distributed system of field-programmable gate array (FPGA) boards. The software framework provides users with the ability to easily develop applications that exploit the processing power of FPGAs, while the hardware core manager framework gives users the ability to configure and interact with multiple FPGA boards and/or hardware cores. This thesis describes the design and development of these frameworks and analyzes the performance of a system that was constructed using them. The performance analysis included measuring the effect of incorporating additional hardware components into the system and comparing the system to a software-only implementation. This work draws conclusions based on the results of the performance analysis and offers suggestions for future work.
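To make the division of labour concrete, the following minimal Python sketch shows how a client of such a hardware core manager might register FPGA boards and route a request to a named hardware core. The class names, methods, and the mock "core" are hypothetical illustrations only, not the API developed in the thesis.

# Hypothetical sketch (not the thesis API): a client registers boards and
# dispatches work to the first board that exposes the requested core.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Board:
    """One FPGA board exposing a set of configured hardware cores."""
    name: str
    cores: Dict[str, Callable[[bytes], bytes]] = field(default_factory=dict)


class CoreManager:
    """Tracks boards and forwards requests to a matching hardware core."""

    def __init__(self) -> None:
        self.boards: List[Board] = []

    def register(self, board: Board) -> None:
        self.boards.append(board)

    def run(self, core_name: str, payload: bytes) -> bytes:
        for board in self.boards:
            if core_name in board.cores:
                return board.cores[core_name](payload)
        raise LookupError(f"no board exposes core '{core_name}'")


# Mock usage: a 'core' is just a callable here; on real hardware it would wrap
# configuration and data transfers to and from the FPGA fabric.
manager = CoreManager()
manager.register(Board("board0", {"reverse": lambda data: data[::-1]}))
print(manager.run("reverse", b"example payload"))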
Abstract:
The alveolated structure of the pulmonary acinus plays a vital role in gas exchange function. Three-dimensional (3D) analysis of the parenchymal region is fundamental to understanding this structure-function relationship, but only a limited number of attempts have been made in the past because of technical limitations. In this study, we developed a new image processing methodology based on finite element (FE) analysis for accurate 3D structural reconstruction of the gas exchange regions of the lung. Stereologically well characterized rat lung samples (Pediatr Res 53: 72-80, 2003) were imaged using high-resolution synchrotron radiation-based X-ray tomographic microscopy. A stack of 1,024 images (each slice: 1024 x 1024 pixels) with a resolution of 1.4 μm³ per voxel was generated. For the development of the FE algorithm, regions of interest (ROI) containing approximately 7.5 million voxels were extracted as a working subunit. 3D FEs were created overlaying the voxel map using a grid-based hexahedral algorithm. A proper threshold value for appropriate segmentation was determined iteratively so that the calculated volume density of tissue matched the stereologically determined value (Pediatr Res 53: 72-80, 2003). The resulting 3D FEs are ready to be used for 3D structural analysis as well as for subsequent FE computational analyses such as fluid dynamics and skeletonization.
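As an illustration of the threshold-matching step, the following Python sketch (assumed details, not the authors' code) bisects on the grey-level threshold until the segmented tissue volume fraction of a synthetic voxel block matches a stereologically determined target value.

# Minimal sketch, assuming a monotone relationship between threshold and
# segmented tissue fraction; the synthetic volume stands in for a real ROI.

import numpy as np


def tissue_fraction(volume, threshold):
    """Fraction of voxels classified as tissue at a given threshold."""
    return float((volume >= threshold).mean())


def match_threshold(volume, target_fraction, iterations=40):
    """Bisect on the threshold; the tissue fraction falls as the threshold rises."""
    lo, hi = float(volume.min()), float(volume.max())
    for _ in range(iterations):
        mid = 0.5 * (lo + hi)
        if tissue_fraction(volume, mid) > target_fraction:
            lo = mid          # too much tissue segmented -> raise the threshold
        else:
            hi = mid          # too little tissue segmented -> lower the threshold
    return 0.5 * (lo + hi)


# Synthetic region of interest standing in for the ~7.5 million voxel ROI.
rng = np.random.default_rng(0)
roi = rng.normal(loc=100.0, scale=20.0, size=(64, 64, 64))
thr = match_threshold(roi, target_fraction=0.20)
print(thr, tissue_fraction(roi, thr))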
Abstract:
Single-screw extrusion is one of the most widely used processing methods in the plastics industry, which was the third largest manufacturing industry in the United States in 2007 [5]. In order to optimize the single-screw extrusion process, tremendous effort has been devoted to the development of accurate models over the last fifty years, especially for polymer melting in screw extruders. This has led to a good qualitative understanding of the melting process; however, quantitative predictions of melting from various models often show large errors in comparison to experimental data. Thus, even today, process parameters and the geometry of the extruder channel for single-screw extrusion are determined by trial and error. Since new polymers are developed frequently, finding the optimum parameters to extrude these polymers by trial and error is costly and time consuming. In order to reduce the time and experimental work required to optimize the process parameters and the geometry of the extruder channel for a given polymer, the main goal of this research was to perform a coordinated experimental and numerical investigation of melting in screw extrusion. In this work, a full three-dimensional finite element simulation of the two-phase flow in the melting and metering zones of a single-screw extruder was performed by solving the conservation equations for mass, momentum, and energy. The only previous attempt at such a three-dimensional simulation of melting in a screw extruder was made more than twenty years ago. However, that work had only limited success because of the limited capability of the computers and mathematical algorithms available at that time. The dramatic improvement in computational power and mathematical knowledge now makes it possible to run full 3-D simulations of two-phase flow in single-screw extruders on a desktop PC. In order to verify the numerical predictions from the full 3-D simulations of two-phase flow in single-screw extruders, a detailed experimental study was performed. This experimental study included Maddock screw-freezing experiments, Screw Simulator experiments and material characterization experiments. Maddock screw-freezing experiments were performed in order to visualize the melting profile along the single-screw extruder channel with different screw geometry configurations. These melting profiles were compared with the simulation results. Screw Simulator experiments were performed to collect shear stress and melting flux data for various polymers. Cone and plate viscometer experiments were performed to obtain the shear viscosity data needed in the simulations. An optimization code was developed to optimize two screw geometry parameters, namely the screw lead (pitch) and the depth in the metering section of a single-screw extruder, such that the output rate of the extruder was maximized without exceeding the maximum temperature value specified at the exit of the extruder. This optimization code used a mesh partitioning technique in order to obtain the flow domain. The simulations in this flow domain were performed using the code developed to simulate the two-phase flow in single-screw extruders.
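The structure of that optimization loop can be summarized by the following hedged Python sketch. Here simulate() is only a placeholder for the full 3-D two-phase flow simulation, and all numerical values, parameter ranges, and trends are invented for illustration.

# Sketch of a constrained search over screw lead and channel depth that
# maximizes output rate subject to a maximum exit temperature. The real
# objective would come from the FE simulation, not this toy stand-in.

import itertools


def simulate(lead_mm, depth_mm):
    """Placeholder for the FE simulation: returns (output_rate, exit_temp)."""
    output_rate = lead_mm * depth_mm               # toy trend: larger channel, more output
    exit_temp = 180.0 + 2.0 * lead_mm + 8.0 * depth_mm
    return output_rate, exit_temp


def optimize(max_exit_temp):
    """Pick the (lead, depth) pair with the highest feasible output rate."""
    best = (0.0, 0.0, 0.0)                         # (rate, lead, depth)
    for lead, depth in itertools.product(range(20, 61, 5), range(2, 9)):
        rate, temp = simulate(float(lead), float(depth))
        if temp <= max_exit_temp and rate > best[0]:
            best = (rate, float(lead), float(depth))
    return best


print(optimize(max_exit_temp=260.0))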
Abstract:
The 3' cleavage generating non-polyadenylated animal histone mRNAs depends on the base pairing between U7 snRNA and a conserved histone pre-mRNA downstream element. This interaction is enhanced by a 100 kDa zinc finger protein (ZFP100) that forms a bridge between an RNA hairpin element upstream of the processing site and the U7 small nuclear ribonucleoprotein (snRNP). The N-terminus of Lsm11, a U7-specific Sm-like protein, was shown to be crucial for histone RNA processing and to bind ZFP100. By further analysing these two functions of Lsm11, we find that Lsm11 and ZFP100 can undergo two interactions: one between the Lsm11 N-terminus and the zinc finger repeats of ZFP100, and one between the N-terminus of ZFP100 and the Sm domain of Lsm11. Neither interaction is specific for the two proteins in vitro, but the second interaction is sufficient for a specific recognition of the U7 snRNP by ZFP100 in cell extracts. Furthermore, clustered point mutations in three phylogenetically conserved regions of the Lsm11 N-terminus impair or abolish histone RNA processing. As these mutations have no effect on the two interactions with ZFP100, these protein regions must play other roles in histone RNA processing, e.g. by contacting the pre-mRNA or additional processing factors.
Abstract:
We report a trace element and Pb isotope analysis (LIA) database on the "Singen Copper", a peculiar type of copper found in the North Alpine realm, from its type locality, the Early Bronze Age Singen Cemetery (Germany). What distinguishes "Singen Copper" from other coeval copper types? (i) Is it a discrete metal lot with a uniform provenance (and if so, can its provenance be constrained)? (ii) Was it manufactured by a special, unique metallurgical process that can be discriminated from others? Trace element concentrations can give clues about the ore types that were mined, but they can be modified (more or less intentionally) by metallurgical operations. More robust indicators are the ratios of chemically similar elements (e.g. Co/Ni, Bi/Sb, etc.), since they should remain nearly constant during metallurgical operations and are expected to behave homogeneously in each mineral of a given mining area; however, their partitioning amongst the different mineral species is known to cause strong inter-element fractionations. We tested the trace element ratio pattern predicted by geochemical arguments on the Brixlegg mining area. Brixlegg itself is not compatible with the Singen Copper objects, and we only report it because it is a rare instance of a mining area for which sufficient trace element analyses are available in the literature. We observe that As/Sb in fahlerz varies by a factor of 1.8 above/below the median; As/Sb in enargite varies by a factor of 2.5, with a median 10 times higher. Most of the 102 analyzed metal objects from Singen are Sb-Ni-rich, corresponding to the "antimony-nickel copper" of the literature. Other trace element concentrations vary by > 100 times, and ratios by factors > 50. Pb isotopic compositions are all significantly different from each other. They do not form a single linear array and require > 3 ore batches that certainly do not derive from one single mining area. Our data suggest a heterogeneous provenance of "Singen Copper". Archaeological information limits the scope to Central European sources. LIA requires a diverse supply network drawing on many mining localities, possibly including Brittany. Trace element ratios show more heterogeneity than LIA; this can be explained either by deliberate selection of one particular ore mineral (from very many sources) or by processing of assorted ore minerals from a smaller number of sources, with the unintentional effect that the quality of the copper would not be constant, as the metallurgical properties of alloys vary with trace element concentrations.
Abstract:
Histone pre-mRNA 3' processing is controlled by a hairpin element preceding the processing site, which interacts with a hairpin-binding protein (HBP), and by a downstream spacer element that serves as the anchoring site for the U7 snRNP. In addition, the nucleotides following the hairpin and surrounding the processing site (ACCCA'CA) are conserved among vertebrate histone genes. Single to triple nucleotide mutations of this sequence were tested for their ability to be processed in nuclear extract from animal cells. Changing the first four nucleotides had no qualitative effect and little, if any, quantitative effect on histone RNA 3' processing in mouse K21 cell extract, where processing of this gene is virtually independent of the HBP. A gel mobility shift assay revealing HBP interactions and a processing assay in HeLa cell extract (where the contribution of HBP to efficient processing is more important) showed that only one of these mutations, predicted to extend the hairpin by one base pair, affected the interaction with HBP. Mutations in the next three nucleotides affected both the cleavage efficiency and the choice of processing sites. Analysis of these novel sites indicated a preference for the nucleotide 5' of the cleavage site in the order A > C > U > G. Moreover, a guanosine in the 3' position inhibited cleavage. The preference for an A is shared with the cleavage/polyadenylation reaction, but the preference order for the other nucleotides is different [Chen F, MacDonald CC, Wilusz J, 1995, Nucleic Acids Res 23:2614-2620].
Abstract:
We have analysed the extent of base-pairing interactions between spacer sequences of histone pre-mRNA and U7 snRNA present in the trans-acting U7 snRNP and their importance for histone RNA 3' end processing in vitro. For the efficiently processed mouse H4-12 gene, a computer analysis revealed that additional base pairs could be formed with U7 RNA outside of the previously recognised spacer element (stem II). One complementarity (stem III) is located more 3' and involves nucleotides from the very 5' end of U7 RNA. The other, more 5' located complementarity (stem I) involves nucleotides of the Sm binding site of U7 RNA, a part known to interact with snRNP structural proteins. These potential stem structures are separated from each other by short internal loops of unpaired nucleotides. Mutational analyses of the pre-mRNA indicate that stems II and III are equally important for interaction with the U7 snRNP and for processing, whereas mutations in stem I have moderate effects on processing efficiency, but do not impair complex formation with the U7 snRNP. Thus nucleotides near the processing site may be important for processing, but do not contribute to the assembly of an active complex by forming a stem I structure. The importance of stem III was confirmed by the ability of a complementary mutation in U7 RNA to suppress a stem III mutation in a complementation assay using Xenopus laevis oocytes. The main role of the factor(s) binding to the upstream hairpin loop is to stabilise the U7-pre-mRNA complex. This was shown by either stabilising (by mutation) or destabilising (by increased temperature) the U7-pre-mRNA base-pairing under conditions where hairpin factor binding was either allowed or prevented (by mutation or competition). The hairpin dependence of processing was found to be inversely related to the strength of the U7-pre-mRNA interaction.
Abstract:
Magnetic resonance imaging (MRI) is a non-invasive technique that offers excellent soft tissue contrast for characterizing soft tissue pathologies. Diffusion tensor imaging (DTI) is an MRI technique that has been shown to have the sensitivity to detect subtle pathology that is not evident on conventional MRI. Rats are commonly used as animal models in characterizing spinal cord pathologies, including spinal cord injury (SCI), cancer, multiple sclerosis, etc. These pathologies can affect both the thoracic and cervical regions, and complete characterization of these pathologies using MRI requires DTI characterization in both regions. Prior to the application of DTI for investigating pathologic changes in the spinal cord, it is essential to establish DTI metrics in normal animals. To date, in-vivo DTI studies of the rat spinal cord have used implantable coils for high signal-to-noise ratio (SNR) and spin-echo pulse sequences for reduced geometric distortions. Implantable coils have several disadvantages, including: (1) the invasive nature of implantation, (2) loss of SNR due to frequency shift with time in longitudinal studies, and (3) difficulty in imaging the cervical region. While echo planar imaging (EPI) offers much shorter acquisition times compared to spin-echo imaging, EPI is very sensitive to static magnetic field inhomogeneities, and the existing shimming techniques implemented on the MRI scanner do not perform well on the spinal cord because of its geometry. In this work, an integrated approach has been implemented for in-vivo DTI characterization of the rat spinal cord in the thoracic and cervical regions. A three-element phased array coil was developed for improved SNR and extended spatial coverage. A field-map shimming technique was developed for minimizing the geometric distortions in EPI images. Using these techniques, EPI-based DWI images were acquired with an optimized diffusion encoding scheme from 6 normal rats, and the DTI-derived metrics were quantified. The phantom studies indicated higher SNR and smaller bias in the estimated DTI metrics than in previous studies in the cervical region. In-vivo results indicated no statistical difference in the DTI characteristics of either gray matter or white matter between the thoracic and cervical regions.
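For reference, two of the DTI-derived metrics referred to above, mean diffusivity (MD) and fractional anisotropy (FA), follow from the eigenvalues of the diffusion tensor. The Python sketch below uses the standard formulas; the example tensor values are invented and only roughly typical of coherent white matter.

# Standard DTI metric formulas: MD is the mean eigenvalue, FA is the
# normalized dispersion of the eigenvalues about that mean.

import numpy as np


def dti_metrics(tensor):
    """Return (mean diffusivity, fractional anisotropy) of a 3x3 diffusion tensor."""
    eigvals = np.linalg.eigvalsh(tensor)
    md = eigvals.mean()
    fa = np.sqrt(1.5 * np.sum((eigvals - md) ** 2) / np.sum(eigvals ** 2))
    return float(md), float(fa)


# Illustrative tensor (mm^2/s) with strong diffusion along one axis.
D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
md, fa = dti_metrics(D)
print(f"MD = {md:.2e} mm^2/s, FA = {fa:.2f}")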
Abstract:
Ocean Drilling Program Hole 923A, located on the western flank of the Mid-Atlantic Ridge south of the Kane Fracture Zone, recovered primitive gabbros that have mineral trace element compositions inconsistent with growth from a single parental melt. Plagioclase crystals commonly show embayed anorthitic cores overgrown by more albitic rims. Ion probe analyses of plagioclase cores and rims show consistent differences in trace element ratios, indicating variation in the trace element characteristics of their respective parental melts. This requires the existence of at least two distinct melt compositions within the crust during the generation of these gabbros. Melt compositions calculated to be parental to the plagioclase cores are depleted in light rare earth elements, but enriched in yttrium, compared to basalts from this region of the Mid-Atlantic Ridge, which are normal mid-ocean ridge basalt (N-MORB). Clinopyroxene trace element compositions are similar to those predicted to be in equilibrium with N-MORB. However, primitive clinopyroxene crystals are much more magnesian than those produced in one-atmosphere experiments on N-MORB, suggesting that the major element composition of the melt was unlike N-MORB. These data require that the diverse array of melt compositions generated within the mantle beneath mid-ocean ridges is not always fully homogenised during melt extraction from the mantle, and that the final stage of mixing can occur efficiently within crustal magma chambers. This has implications for the process of melt extraction from the mantle and the liquid line of descent of MORB.
Abstract:
Formation of the Cretaceous Caribbean plateau, including the komatiites of Gorgona, has been linked to the currently active Galápagos hotspot. We use Hf-Nd isotopes and trace element data to characterise both the Caribbean plateau and the Galápagos hotspot, and to investigate the relationship between them. Four geochemical components are identified in the Galápagos mantle plume: two 'enriched' components with epsilon-Hf and epsilon-Nd similar to enriched components observed in other mantle plumes, one moderately enriched component with high Nb/Y, and a fourth component which most likely represents depleted MORB source mantle. The Caribbean plateau basalt data form a linear array in Hf-Nd isotope space, consistent with mixing between two mantle components. Combined Hf-Nd-Pb-Sr-He isotope and trace element data from this study and the literature suggest that the more enriched Caribbean end member corresponds to one or both of the enriched components identified on Galápagos. Likewise, the depleted end member of the array is geochemically indistinguishable from MORB and corresponds to the depleted component of the Galápagos system. Enriched basalts from Gorgona partially overlap with the Caribbean plateau array in epsilon-Hf vs. epsilon-Nd, whereas depleted basalts, picrites and komatiites from Gorgona have a high epsilon-Hf for a given epsilon-Nd, defining a high-epsilon-Hf depleted end member that is not observed elsewhere within the Caribbean plateau sequences. This component is similar, however, in terms of Hf-Nd-Pb-He isotopes and trace elements to the depleted plume component recognised in basalts from Iceland and along the Reykjanes Ridge. We suggest that the Caribbean plateau represents the initial outpourings of the ancestral Galápagos plume. Absence of a moderately enriched, high Nb/Y component in the older Caribbean plateau (but found today on the island of Floreana) is either due to changing source compositions of the plume over its 90 Ma history, or is an artifact of limited sampling. The high-epsilon-Hf depleted component sampled by the Gorgona komatiites and depleted basalts is unique to Gorgona and is not found in the Caribbean plateau. This may be an indication of the scale of heterogeneity of the Caribbean plateau system; alternatively Gorgona may represent a separate oceanic plateau derived from a completely different Pacific plume, such as the Sala y Gomez.
Abstract:
Sr and Nd isotopic compositions of 23 basalts from Sites 556-559 and 561-564 are reported. The 87Sr/86Sr ratios in fresh glasses and leached whole rocks range from 0.7025 to 0.7034 and are negatively correlated with the initial 143Nd/144Nd compositions, which range from 0.51315 to 0.51289. The Sr and Nd isotopic compositions (in glasses or leached samples) lie within the fields of mid-ocean ridge basalts (MORB) and ocean island basalts (OIB) from the Azores on the Nd-Sr mantle array/fan plot. In general, there is a correlation between the trace element characteristics and the 143Nd/144Nd composition (i.e., samples with Hf/Ta > 7 and (Ce/Sm)N < 1 (normal MORB) have initial 143Nd/144Nd > 0.51307, whereas samples with Hf/Ta < 7 and (Ce/Sm)N > 1 (enriched MORB) have initial 143Nd/144Nd compositions < 0.51300). A significant deviation from this general rule is found in Hole 558, where the N-MORB can have, within experimental limits, isotopic compositions identical to those found in associated E-MORB. The plume/depleted-asthenosphere mixing hypothesis of Schilling (1975), White and Schilling (1978) and Schilling et al. (1977) provides a framework within which the present data can be evaluated. Given the distribution and possible origins of the chemical and isotopic heterogeneity observed in Leg 82 basalts, and some other basalts in the area, it would appear that the Schilling et al. model is not entirely satisfactory. In particular, it can be shown that trace element data may incorrectly estimate the plume component, and more localized mantle heterogeneity (both chemical and isotopic) may be important.
Abstract:
The 50 km-long West Valley segment of the northern Juan de Fuca Ridge is a young, extension-dominated spreading centre, with volcanic activity concentrated in its southern half. A suite of basalts dredged from the West Valley floor, the adjacent Heck Seamount chain, and a small near-axis cone here named Southwest Seamount includes a spectrum of geochemical compositions ranging from highly depleted normal (N-) MORB to enriched (E-) MORB. Heck Seamount lavas have chondrite-normalized La/Sm of ~0.3, 87Sr/86Sr = 0.70235-0.70242, and 206Pb/204Pb = 18.22-18.44, requiring a source which is highly depleted in trace elements both at the time of melt generation and over geologic time. The E-MORB from Southwest Seamount have chondrite-normalized La/Sm of ~1.8, 87Sr/86Sr = 0.70245-0.70260, and 206Pb/204Pb = 18.73-19.15, indicating a more enriched source. Basalts from the West Valley floor have chemical compositions intermediate between these two end-members. As a group, West Valley basalts form a two-component mixing array in element-element and element-isotope plots which is best explained by magma mixing. Evidence for crustal-level magma mixing in some basalts includes mineral-melt chemical and isotopic disequilibrium, but mixing of melts at depth (within the mantle) may also occur. The mantle beneath the northern Juan de Fuca Ridge is modelled as a plum-pudding, with "plums" of enriched, amphibole-bearing peridotite floating in a depleted matrix (DM). Low degrees of melting preferentially melt the "plums", initially removing only the amphibole component and producing alkaline to transitional E-MORB. Higher degrees of melting tap both the "plums" and the depleted matrix to yield N-MORB. The subtly different isotopic compositions of the E-MORBs compared to the N-MORBs require that any enriched component in the upper mantle was derived from a depleted source. If the enriched component crystallized from fluids with a DM source, the "plums" could evolve to their more evolved isotopic composition after a period of 1.5-2.0 Ga. Alternatively, the enriched component could have formed recently from fluids with a less depleted source than DM, such as subducted oceanic crust. A third possibility is that enriched material might be dispersed as "plums" throughout the upper mantle, transported from depth by mantle plumes.
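The two-component mixing invoked above can be illustrated with the standard binary mixing relation, in which the isotope ratio of a mixture is the concentration-weighted mean of the end-member ratios. In the Python sketch below the 87Sr/86Sr end-member values lie within the ranges quoted in the abstract, while the Sr concentrations are assumed purely for illustration.

# Binary mixing sketch between a depleted (Heck-like) and an enriched
# (Southwest Seamount-like) end member; concentrations are invented.

import numpy as np

c_dep, r_dep = 90.0, 0.70238     # Sr ppm (assumed), 87Sr/86Sr (within quoted range)
c_enr, r_enr = 250.0, 0.70255    # Sr ppm (assumed), 87Sr/86Sr (within quoted range)

f = np.linspace(0.0, 1.0, 11)            # mass fraction of the enriched component
sr_mix = f * c_enr + (1 - f) * c_dep     # Sr concentration of the mixture
r_mix = (f * c_enr * r_enr + (1 - f) * c_dep * r_dep) / sr_mix

for fi, ri in zip(f, r_mix):
    print(f"f_enriched = {fi:.1f}  87Sr/86Sr = {ri:.5f}")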
Abstract:
This paper presents general approaches that can be used to control the relative phase between elements in an antenna array. Digital phase shifter devices have become a strategic component, and steps have been taken by the U.S. Government to restrict their export; as a result, their price has risen owing to low supply in the market. It is therefore necessary to adopt solutions that still allow the design and construction of antenna arrays. A system based on a group of staggered phase shifts with external switching is shown, which can be extrapolated to a complete array.
Abstract:
A multibeam antenna study based on a Butler network is undertaken in this document. These antenna designs combine phase shift systems with multibeam networks to optimize multiple channel systems. The system operates at 1.7 GHz with circular polarization. Specifically, simulation results and measurements of a 3-element triangular subarray are shown. A 45-element triangular array will be formed from these subarrays. Using triangular subarrays, side lobes and crossing points are reduced.
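As a generic illustration of how subarray element positions and excitation phases set the radiated beam (not the paper's design data), the following Python sketch evaluates the far-field array factor of a 3-element equilateral-triangle subarray at the 1.7 GHz operating frequency; the element spacing and excitation phases are assumed.

# Far-field array factor of a 3-element triangular subarray, swept in azimuth
# at a fixed elevation; only geometry and excitation phases enter the sum.

import numpy as np

c = 3e8
freq = 1.7e9                                # operating frequency from the abstract
lam = c / freq
k = 2 * np.pi / lam
d = 0.5 * lam                               # assumed element spacing (triangle side)

# Element positions at the vertices of an equilateral triangle in the xy-plane.
positions = d * np.array([[0.0, 0.0],
                          [1.0, 0.0],
                          [0.5, np.sqrt(3) / 2]])
phases = np.deg2rad([0.0, 120.0, 240.0])    # assumed excitation phases

theta = np.deg2rad(30.0)                    # observation elevation from broadside
phi = np.linspace(0, 2 * np.pi, 360)        # azimuth sweep
u = np.sin(theta) * np.stack([np.cos(phi), np.sin(phi)], axis=1)   # (360, 2)

af = np.abs(np.exp(1j * (k * (u @ positions.T) + phases)).sum(axis=1))
print("peak azimuth (deg):", np.rad2deg(phi[np.argmax(af)]))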
Abstract:
The proliferation of wireless sensor networks and the variety of envisioned applications associated with them have motivated the development of distributed algorithms for collaborative processing over networked systems. One of the applications that has attracted the attention of researchers is target localization, where the nodes of the network try to estimate the position of an unknown target that lies within the network's coverage area. Particularly challenging is the problem of estimating the target's position from the received signal strength indicator (RSSI), because of the nonlinear relationship between the measured signal and the true position of the target. Many of the existing approaches suffer either from high computational complexity (e.g., particle filters) or from lack of accuracy. Further, many of the proposed solutions are centralized, which makes their application to a sensor network questionable. Depending on the application at hand, and from a practical perspective, it can be convenient to find a balance between localization accuracy and complexity. In this direction, we approach the maximum likelihood location estimation problem by solving a suboptimal (and more tractable) problem. One of the main advantages of the proposed scheme is that it allows for a decentralized implementation using distributed processing tools (e.g., consensus and convex optimization) and is therefore very suitable for implementation in real sensor networks. If further accuracy is needed, an additional refinement step can be performed around the solution found. Under the assumption of independent noise among the nodes, such a local search can be done in a fully distributed way using a distributed version of the Gauss-Newton method based on consensus. Regardless of the underlying application or function of the sensor network, it is always necessary to have a mechanism for data reporting. While some approaches use a special kind of node (called a sink node) for data harvesting and forwarding to the outside world, there are, however, some scenarios where such an approach is impractical or even impossible to deploy. Further, such sink nodes become a bottleneck in terms of traffic flow and power consumption. To overcome these issues, instead of using sink nodes for data reporting, one can use collaborative beamforming techniques to forward the generated data directly to a base station or gateway to the outside world. In a distributed environment like a sensor network, nodes cooperate in order to form a virtual antenna array that can exploit the benefits of multi-antenna communications. In collaborative beamforming, nodes synchronize their phases in order to add constructively at the receiver. One of the inconveniences associated with collaborative beamforming techniques is that there is no control over the radiation pattern, since it is treated as a random quantity. This may cause interference to other coexisting systems and fast battery depletion at the nodes. Since energy efficiency is a major design issue, we consider the development of a distributed collaborative beamforming scheme that maximizes the network lifetime while meeting a quality of service (QoS) requirement at the receiver side. Using local information about battery status and channel conditions, we find distributed algorithms that converge to the optimal centralized beamformer.
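The refinement step described above, a local search around a coarse estimate under the maximum likelihood criterion, can be sketched as follows. The log-distance path-loss model, its parameters, and the node placement are assumed for illustration; a fully distributed variant would replace the sums J.T @ J and J.T @ r with consensus averages over the nodes, which here are computed centrally for brevity.

# Gauss-Newton refinement of a target position from RSSI readings under an
# assumed log-distance path-loss model with Gaussian noise.

import numpy as np

rng = np.random.default_rng(1)
anchors = rng.uniform(0, 50, size=(8, 2))        # sensor node positions (m)
target = np.array([22.0, 31.0])                  # true target position
p0, n_exp, sigma = -40.0, 3.0, 1.0               # path-loss parameters (assumed)

dist = np.linalg.norm(anchors - target, axis=1)
rssi = p0 - 10 * n_exp * np.log10(dist) + sigma * rng.standard_normal(len(dist))


def gauss_newton(x, iters=20):
    """Refine the position estimate x by Gauss-Newton on the RSSI residuals."""
    for _ in range(iters):
        diff = x - anchors                                        # (N, 2)
        d = np.linalg.norm(diff, axis=1)
        r = rssi - (p0 - 10 * n_exp * np.log10(d))                # residuals
        J = (10 * n_exp / np.log(10)) * diff / d[:, None] ** 2    # d r_i / d x
        x = x - np.linalg.solve(J.T @ J, J.T @ r)
    return x


estimate = gauss_newton(np.array([25.0, 25.0]))   # coarse initial guess
print("estimate:", estimate, " true:", target)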
While in the first part we consider only battery depletion due to the beamforming communications, we extend the model to account for more realistic scenarios by introducing an additional random energy consumption. It is shown how the new problem generalizes the original one and under which conditions it is easily solvable. By formulating the problem from an energy-efficiency perspective, the network's lifetime is significantly improved.
Abstract: The proliferation of wireless sensor networks, together with the wide variety of related applications, has motivated the development of the tools and algorithms needed for cooperative processing in distributed systems. One of the applications that has attracted the most interest in the scientific community is localization, where the set of network nodes tries to estimate the position of a target located within its coverage area. The localization problem is especially challenging when received signal strength indicator (RSSI) levels are used as the localization measurement. The main difficulty lies in the fact that the received signal level does not follow a linear relationship with the position of the target. Many current solutions to the RSSI localization problem rely on complex centralized schemes such as particle filters, while others rely on much simpler schemes with lower accuracy. Moreover, in many cases the strategies are centralized, which makes them impractical for implementation in sensor networks. From a practical and implementation point of view, it is convenient, for certain scenarios and applications, to develop alternatives that offer a trade-off between complexity and accuracy. Along these lines, instead of directly tackling the estimation of the target's position under the maximum likelihood criterion, we propose a suboptimal formulation of the problem that is analytically more tractable and offers the advantage of allowing the localization problem to be solved in a fully distributed way, making it an attractive solution in the context of wireless sensor networks. To this end, distributed processing tools such as consensus algorithms and convex optimization in distributed systems are used. For applications requiring a higher degree of accuracy, a strategy is proposed that consists of locally optimizing the likelihood function around the initially obtained estimate. This optimization can be performed in a decentralized way using a consensus-based version of the Gauss-Newton method, provided the measurement noise at the different nodes is assumed to be independent. Regardless of the underlying application of the sensor network, it is necessary to have a mechanism for gathering the data generated by the network. One way to do this is by using one or more special nodes, called sink nodes, which act as information collection points and are equipped with additional hardware that allows them to interact with the outside of the network. The main disadvantage of this strategy is that such nodes become bottlenecks in terms of traffic and computational capacity.
As an alternative, cooperative beamforming techniques can be used, so that the network as a whole can be seen as a single virtual multi-antenna system and can therefore exploit the benefits offered by multi-antenna communications. To this end, the different nodes of the network synchronize their transmissions so that constructive interference is produced at the receiver. However, current techniques are based on average and asymptotic results, valid when the number of nodes is very large. For a specific configuration, control over the radiation pattern is lost, possibly causing interference to coexisting systems or spending more power than required. Energy efficiency is a central issue in wireless sensor networks, since the nodes are battery powered. It is therefore very important to preserve the battery, avoiding unnecessary replacements and the consequent increase in costs. Under these considerations, a beamforming scheme is proposed that maximizes the useful lifetime of the network, understood as the maximum time the network can remain operational while guaranteeing the quality of service (QoS) requirements that allow reliable decoding of the signal received at the base station. Distributed algorithms that converge to the centralized solution are also proposed. Initially, the only source of energy consumption considered is communication with the base station. This energy consumption model is then modified to take into account other forms of energy consumption arising from processes inherent to the operation of the network, such as data acquisition and processing, local communications between nodes, etc. This additional energy consumption is modeled as a random variable at each node. The setting therefore becomes a probabilistic scenario that generalizes the deterministic case, and conditions are provided under which the problem can be solved efficiently. It is shown that the network lifetime improves significantly under the proposed energy-efficiency criterion.