974 results for Target Held Method


Relevância: 30.00%

Resumo:

In practical forensic casework, backspatter recovered from a shooter's hands can be an indicator of a self-inflicted gunshot wound to the head. In such cases, backspatter retrieved from inside the barrel indicates that the weapon found at the death scene was involved in causing the head injury. However, systematic research on the factors conditioning the presence, amount and specific patterns of backspatter has been lacking. Here, a new concept for backspatter investigation is presented, comprising staining technique, weapon and target medium: the 'triple contrast method' was developed, tested and is introduced for experimental backspatter analysis. First, mixtures of various proportions of acrylic paint for optical detection, barium sulphate for radiocontrast imaging in computed tomography and fresh human blood for PCR-based DNA profiling were generated (triple mixture) and tested for DNA quantification and short tandem repeat (STR) typing success. All tested mixtures yielded sufficient DNA to produce full STR profiles suitable for forensic identification. Then, for backspatter analysis, sealed foil bags containing the triple mixture were attached to plastic bottles filled with 10% ballistic gelatine and covered by a 2-3 mm layer of silicone. To simulate backspatter, close-contact shots were fired at these models. Endoscopy of the inside of the barrel revealed coloured backspatter containing typable DNA, and radiographic imaging showed a contrasted bullet path in the gelatine. Cross sections of the gelatine core exhibited cracks and fissures stained by the acrylic paint, facilitating wound ballistic analysis.

Relevância: 30.00%

Resumo:

A major component of minimally invasive cochlear implantation is atraumatic scala tympani (ST) placement of the electrode array. This work reports on a semiautomatic planning paradigm that uses anatomical landmarks and cochlear surface models to compute the cochleostomy target and insertion trajectory. The method was validated in a human whole-head cadaver model (n = 10 ears). Cochleostomy targets were generated by an automated script and used for consecutive planning of a direct cochlear access (DCA) drill trajectory from the mastoid surface to the inner ear. An image-guided robotic system was used to perform both DCA and cochleostomy drilling. Nine of the 10 implanted specimens showed complete ST placement. One case of scala vestibuli insertion occurred due to a registration/drilling error of 0.79 mm. The presented approach indicates that a safe cochleostomy target and insertion trajectory can be planned using conventional clinical imaging modalities, which lack sufficient resolution to identify the basilar membrane.
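As an illustration of the kind of geometric check such trajectory planning involves, the sketch below computes a straight entry-to-target drill axis and its minimum clearance from sampled points on a critical structure. The coordinates, the 1 mm clearance threshold, and the function names are hypothetical and are not taken from the paper's planning pipeline.

```python
import numpy as np

def segment_point_distance(p, a, b):
    """Distance (mm) from point p to the segment a-b, all in 3D image coordinates."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def plan_dca_trajectory(entry, target, critical_points, min_clearance_mm=1.0):
    """Return the drill direction and whether every critical landmark keeps at
    least min_clearance_mm from the straight entry->target trajectory."""
    direction = (target - entry) / np.linalg.norm(target - entry)
    clearances = [segment_point_distance(p, entry, target) for p in critical_points]
    return direction, min(clearances) >= min_clearance_mm, min(clearances)

# toy numbers in an image coordinate frame (mm) -- purely illustrative
entry = np.array([0.0, 0.0, 0.0])          # mastoid surface entry point
target = np.array([35.0, 8.0, -12.0])      # planned cochleostomy target
facial_nerve_samples = [np.array([20.0, 1.5, -6.0]), np.array([22.0, 2.5, -7.0])]

direction, is_safe, clearance = plan_dca_trajectory(entry, target, facial_nerve_samples)
print(f"drill direction {direction}, safe={is_safe}, min clearance {clearance:.2f} mm")
```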

Relevância: 30.00%

Resumo:

We propose a method to acquire 3D light fields using a hand-held camera, and describe several computational photography applications facilitated by our approach. As our input we take an image sequence from a camera translating along an approximately linear path with limited camera rotations. Users can acquire such data easily in a few seconds by moving a hand-held camera. We include a novel approach to resample the input into regularly sampled 3D light fields by aligning them in the spatio-temporal domain, and a technique for high-quality disparity estimation from light fields. We show applications including digital refocusing and synthetic aperture blur, foreground removal, selective colorization, and others.
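As a rough illustration of the digital refocusing application mentioned above, the following sketch performs shift-and-add synthetic aperture refocusing, assuming the input frames have already been resampled into a regularly sampled light field along a linear camera path. The array layout and the sign convention of the refocus slope are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(light_field, slope):
    """Shift-and-add refocusing of a light field captured along a linear path.

    light_field: array of shape (num_views, H, W, 3), views regularly spaced
                 along the (horizontal) camera path.
    slope:       disparity per view step (pixels), which selects the focal plane.
    """
    num_views = light_field.shape[0]
    center = (num_views - 1) / 2.0
    acc = np.zeros(light_field.shape[1:], dtype=np.float64)
    for v in range(num_views):
        dx = slope * (v - center)                       # horizontal shift for this view
        acc += nd_shift(light_field[v].astype(np.float64),
                        shift=(0, dx, 0), order=1, mode='nearest')
    return acc / num_views   # scene points at the chosen depth align and stay sharp; others blur

# usage: refocused = refocus(lf, slope=1.5)
# larger |slope| moves the focal plane; the sign depends on the capture direction
```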

Relevância: 30.00%

Resumo:

Perceived duration is assumed to be positively related to nontemporal stimulus magnitude. Most recently, the finding that larger stimuli are perceived to last longer has been challenged as reflecting a mere decisional bias induced by the use of comparative duration judgments. Therefore, in the present study, the method of temporal reproduction was applied as a psychophysical procedure to quantify perceived duration. Another major goal was to investigate the influence of attention on the effect of visual stimulus size on perceived duration. For this purpose, an additional dual-task paradigm was employed. Our results not only converged with previous findings in demonstrating a functional positive relationship between nontemporal stimulus size and perceived duration, but also showed that the effect of stimulus size on perceived duration is not confined to comparative duration judgments. Furthermore, the effect of stimulus size proved to be independent of the attentional resources allocated to stimulus size; nontemporal visual stimulus information does not need to be processed intentionally to influence perceived duration. Finally, the effect of nontemporal stimulus size on perceived duration was effectively modulated by the duration of the target intervals, suggesting a hitherto largely unrecognized role of temporal context in making the effect of nontemporal stimulus size evident.

Relevância: 30.00%

Resumo:

In population studies, most current methods focus on identifying one outcome-related SNP at a time by testing for differences in genotype frequencies between disease and healthy groups or among different population groups. However, testing a great number of SNPs simultaneously raises the problem of multiple testing and will give false-positive results. Although this problem can be dealt with effectively through approaches such as Bonferroni correction, permutation testing and false discovery rates, patterns of joint effects from several genes, each with a weak effect, might not be detectable. With the availability of high-throughput genotyping technology, searching for multiple scattered SNPs over the whole genome and modeling their joint effect on the target variable has become possible. Exhaustive search of all SNP subsets is computationally infeasible for millions of SNPs in a genome-wide study. Several effective feature selection methods combined with classification functions have been proposed to search for an optimal SNP subset among big data sets where the number of feature SNPs far exceeds the number of observations.

In this study, we took two steps to achieve this goal. First, we selected 1000 SNPs through an effective filter method, and then we performed feature selection wrapped around a classifier to identify an optimal SNP subset for predicting disease. We also developed a novel classification method, the sequential information bottleneck method, wrapped inside different search algorithms to identify an optimal subset of SNPs for classifying the outcome variable. This new method was compared with classical linear discriminant analysis in terms of classification performance. Finally, we performed a chi-square test to examine the relationship between each SNP and disease from another point of view.

In general, our results show that filtering features using the harmonic mean of sensitivity and specificity (HMSS) through linear discriminant analysis (LDA) is better than using LDA training accuracy or mutual information in our study. Our results also demonstrate that exhaustive search of a small subset with one SNP, two SNPs or a 3-SNP subset based on the best 100 composite 2-SNPs can find an optimal subset, and that further inclusion of more SNPs through a heuristic algorithm does not always increase the performance of SNP subsets. Although sequential forward floating selection can be applied to prevent the nesting effect of forward selection, it does not always outperform the latter, due to overfitting from observing more complex subset states.

Our results also indicate that HMSS, as a criterion to evaluate the classification ability of a function, can be used on imbalanced data without modifying the original dataset, in contrast to classification accuracy. Our four studies suggest that the sequential information bottleneck (sIB), a new unsupervised technique, can be adopted to predict the outcome, and that its ability to detect the target status is superior to that of traditional LDA in this study.

From our results, the best test probability-HMSS for predicting CVD, stroke, CAD and psoriasis through sIB is 0.59406, 0.641815, 0.645315 and 0.678658, respectively. In terms of group prediction accuracy, the highest test accuracy of sIB for diagnosing a normal status among controls reaches 0.708999, 0.863216, 0.639918 and 0.850275, respectively, in the four studies if the test accuracy among cases is required to be at least 0.4. On the other hand, the highest test accuracy of sIB for diagnosing disease among cases reaches 0.748644, 0.789916, 0.705701 and 0.749436, respectively, in the four studies if the test accuracy among controls is required to be at least 0.4.

A further genome-wide association study through the chi-square test shows that no significant SNPs were detected at the cut-off level 9.09451E-08 in the Framingham Heart Study of CVD. The study results in WTCCC detect only two significant SNPs associated with CAD. In the genome-wide study of psoriasis, most of the top 20 SNP markers with impressive classification accuracy are also significantly associated with the disease through the chi-square test at the cut-off value 1.11E-07.

Although our classification methods can achieve high accuracy in this study, complete descriptions of the classification results (95% confidence intervals or statistical tests of differences) require more cost-effective methods or a more efficient computing system, neither of which could be accomplished in our genome-wide study. We should also note that the purpose of this study is to identify subsets of SNPs with high prediction ability, and that SNPs with good discriminant power are not necessarily causal markers for the disease.
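The HMSS filter criterion described above can be illustrated with a short sketch: each SNP is scored by the harmonic mean of sensitivity and specificity of a one-SNP LDA classifier under cross-validation, and the top-ranked SNPs are retained. The 0/1/2 genotype coding and the scikit-learn-based implementation are assumptions for illustration, not the dissertation's code.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict

def hmss(y_true, y_pred):
    """Harmonic mean of sensitivity and specificity (cases coded 1, controls 0)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    sens = np.mean(y_pred[y_true == 1] == 1)
    spec = np.mean(y_pred[y_true == 0] == 0)
    return 0.0 if sens + spec == 0 else 2 * sens * spec / (sens + spec)

def filter_snps_by_hmss(genotypes, y, top_k=1000, cv=5):
    """Rank SNPs by the HMSS of a one-SNP LDA classifier.

    genotypes: (n_samples, n_snps) matrix of 0/1/2 minor-allele counts (assumed coding).
    y:         binary disease status (1 = case, 0 = control).
    Returns the indices of the top_k SNPs by HMSS.
    """
    scores = []
    for j in range(genotypes.shape[1]):
        x = genotypes[:, [j]]
        y_pred = cross_val_predict(LinearDiscriminantAnalysis(), x, y, cv=cv)
        scores.append(hmss(y, y_pred))
    return np.argsort(scores)[::-1][:top_k]
```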

Relevância: 30.00%

Resumo:

The three articles that comprise this dissertation describe how small area estimation and geographic information systems (GIS) technologies can be integrated to provide useful information about the number of uninsured and where they are located. Comprehensive data about the numbers and characteristics of the uninsured are typically only available from surveys. Utilization and administrative data are poor proxies from which to develop this information. Those who cannot access services are unlikely to be fully captured, either by health care provider utilization data or by state and local administrative data. In the absence of direct measures, a well-developed estimation of the local uninsured count or rate can prove valuable when assessing the unmet health service needs of this population. However, the fact that these are “estimates” increases the chances that results will be rejected or, at best, treated with suspicion. The visual impact and spatial analysis capabilities afforded by geographic information systems (GIS) technology can strengthen the likelihood of acceptance of area estimates by those most likely to benefit from the information, including health planners and policy makers. The first article describes how uninsured estimates are currently being performed in the Houston metropolitan region. It details the synthetic model used to calculate numbers and percentages of uninsured, and how the resulting estimates are integrated into a GIS. The second article compares the estimation method of the first article with one currently used by the Texas State Data Center to estimate numbers of uninsured for all Texas counties. Estimates are developed for census tracts in Harris County, using both models with the same data sets. The results are statistically compared. The third article describes a new, revised synthetic method that is being tested to provide uninsured estimates at sub-county levels for eight counties in the Houston metropolitan area. It is being designed to replicate the same categorical results provided by a current U.S. Census Bureau estimation method. The estimates calculated by this revised model are compared to the most recent U.S. Census Bureau estimates, using the same areas and population categories.
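As a toy illustration of a synthetic estimation step of the general kind discussed here, the sketch below applies reference uninsured rates for demographic groups to a tract's population counts in those groups; the group definitions and all numbers are invented and do not reflect the dissertation's model.

```python
# Hypothetical synthetic estimate: apply reference uninsured rates for
# demographic groups to one census tract's population counts in those groups.
reference_uninsured_rates = {          # e.g., from a regional survey (illustrative values)
    ("18-34", "male"): 0.31,
    ("18-34", "female"): 0.27,
    ("35-64", "male"): 0.22,
    ("35-64", "female"): 0.19,
}

tract_population = {                   # counts for one census tract (illustrative values)
    ("18-34", "male"): 1200,
    ("18-34", "female"): 1150,
    ("35-64", "male"): 1800,
    ("35-64", "female"): 1900,
}

estimated_uninsured = sum(
    reference_uninsured_rates[group] * count for group, count in tract_population.items()
)
total = sum(tract_population.values())
print(f"estimated uninsured: {estimated_uninsured:.0f} "
      f"({estimated_uninsured / total:.1%} of {total} residents in these groups)")
```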

Relevância: 30.00%

Resumo:

I have developed a novel approach to test for toxic organic substances adsorbed onto ultrafine particles present in the ambient air in Northeast Houston, Texas. These particles are predominantly carbon soot with an aerodynamic diameter (AD) of <2.5 μm. If present in the ambient air, many organic substances will be adsorbed onto the surface of the particles (which act much like a charcoal air filter) and may be drawn into the respiratory system. Once embedded in the lungs, these particles may release the adsorbed toxic organic substances, with serious health consequences. I used an Airmetrics portable MiniVol air sampler to draw ambient air through collection filters at 6 separate sites in Northeast Houston, an area known for high ambient PM 2.5 released from chemical plants and other sources (e.g. vehicle emissions).(1) In practice, the mass of the collected particles was much less than the mass of the filters. My technique was designed to release the organic substances adsorbed on the fine carbon particles by heating the filter samples, including the PM 2.5 particles, prior to identification by gas chromatography/mass spectrometry (GCMS). The results showed negligible amounts of target chemicals from the collection filters. However, the filters alone released organic substances, and GCMS could not distinguish the organic substances released from the soot particles from those released from the heated filter fabric. Nevertheless, an efficacy test of my method using two wax-burning candles that released soot revealed high levels of benzene. This suggests that my method has the potential to reveal the organic substances adsorbed onto the PM 2.5 for analysis. To achieve this goal, I must refine the particle collection process so that it is independent of the filters, since the filters upon heating also release organic substances, obscuring the contribution from the soot particles. To obtain pure soot particles, I will have to filter more air so that the soot particles can be shaken off the filters and then analyzed by my new technique.

Relevância: 30.00%

Resumo:

Supermarket nutrient movement, a community food consumption measure, aggregated 1,023 high-fat foods, representing 100% of visible fats and approximately 44% of hidden fats in the food supply (FAO, 1980). The fatty acid and cholesterol content of foods shipped from the warehouse to 47 supermarkets located in the Houston area was calculated over a 6-month period. These stores were located in census tracts with over 50% of a given ethnicity: Hispanic, black non-Hispanic, or white non-Hispanic. Categorizing the supermarket census tracts by predominant ethnicity, significant differences were found by ANOVA in the proportion of specific fatty acids and the cholesterol content of the foods examined. Using ecological regression, ethnicity, income, and median age predicted supermarket lipid movements, while residential stability did not. No associations were found between lipid movements and cardiovascular disease mortality, making further validation necessary for epidemiological application of this method. However, it has been shown to be a non-reactive and cost-effective method appropriate for tracking target foods in population groups, and for assessing the impact of mass media nutrition education, legislation, and fortification on community food and nutrient purchase patterns.
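A minimal sketch of the aggregation implied by this measure is given below: shipped quantities of each food are weighted by per-unit fatty-acid and cholesterol content and summed. The shipment schema and composition values are hypothetical placeholders, not the study's data.

```python
# Illustrative nutrient-movement aggregation for one store over one period.
# Lipid content per unit of food (hypothetical composition data).
composition = {
    "whole_milk_gal": {"saturated_fat_g": 290.0, "cholesterol_mg": 1290.0},
    "ground_beef_lb": {"saturated_fat_g": 58.0,  "cholesterol_mg": 322.0},
}

# Units of each food shipped from the warehouse to the store (hypothetical counts).
shipments = {"whole_milk_gal": 5400, "ground_beef_lb": 9100}

totals = {}
for food, units in shipments.items():
    for nutrient, per_unit in composition[food].items():
        totals[nutrient] = totals.get(nutrient, 0.0) + units * per_unit

print(totals)   # total lipid movement attributable to these foods for this store
```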

Relevância: 30.00%

Resumo:

Despite having been identified over thirty years ago and definitively established as having a critical role in driving tumor growth and predicting resistance to therapy, the KRAS oncogene remains a cancer target for which there is no effective treatment. KRas is activated by mutations at a few sites, primarily amino acid substitutions at codon 12, which promote a constitutively active state. I have found that different amino acid substitutions at codon 12 can activate different KRas downstream signaling pathways, determine clonogenic growth potential and determine patient response to molecularly targeted therapies. Computer modeling of the KRas structure shows that the particular amino acid substituted at codon 12 influences how KRas interacts with its effectors. In the absence of a direct inhibitor of mutant KRas, several agents have recently entered clinical trials, alone and in combination, that directly target two of the common downstream effector pathways of KRas, namely the Mapk pathway and the Akt pathway. These inhibitors were evaluated for efficacy against different KRAS activating mutations. An isogenic panel of colorectal cells, with wild-type KRas replaced with KRas G12C, G12D, or G12V at the endogenous loci, differed in sensitivity to Mek and Akt inhibition. In contrast, when screening was performed in a broad panel of lung cell lines, no correlation was seen with the type of activating KRAS mutation, likely due to concurrent oncogenic lesions. To find a new way to inhibit KRAS-driven tumors, siRNA screens were performed in isogenic lines with and without active KRas. Knockdown of CNKSR1 (CNK1) showed selective growth inhibition in cells with oncogenic KRAS. Deletion of CNK1 reduces expression of mitotic cell cycle proteins and arrests cells with active KRas in the G1 phase of the cell cycle, similar to the deletion of activated KRas itself, regardless of the activating substitution. CNK1 has a PH domain responsible for localizing it to membrane lipids, making CNK1-facilitated KRas signaling potentially amenable to inhibition with small molecules. This work has identified a series of small molecules capable of binding to this PH domain and inhibiting CNK1-facilitated KRas signaling.

Relevância: 30.00%

Resumo:

Proton therapy is growing increasingly popular due to its superior dose characteristics compared to conventional photon therapy. Protons travel a finite range in the patient body and stop, thereby delivering no dose beyond their range. However, because the range of a proton beam is heavily dependent on the tissue density along its beam path, uncertainties in patient setup position and inherent range calculation can degrade the dose distribution significantly. Despite these challenges, which are unique to proton therapy, current management of these uncertainties during treatment planning has been similar to that of conventional photon therapy. The goal of this dissertation research was to develop a treatment planning method and a plan evaluation method that address proton-specific issues regarding setup and range uncertainties. Treatment plan design method adapted to proton therapy: Currently, for proton therapy using a scanning beam delivery system, setup uncertainties are largely accounted for by geometrically expanding a clinical target volume (CTV) to a planning target volume (PTV). However, a PTV alone cannot adequately account for range uncertainties coupled to misaligned patient anatomy in the beam path, since it does not account for the change in tissue density. To remedy this problem, we proposed a beam-specific PTV (bsPTV) that accounts for the change in tissue density along the beam path due to the uncertainties. Our proposed method was successfully implemented, and its superiority over the conventional PTV was shown through a controlled experiment. Furthermore, we have shown that the bsPTV concept can be incorporated into beam angle optimization for better target coverage and normal tissue sparing for a selected lung cancer patient. Treatment plan evaluation method adapted to proton therapy: The dose-volume histogram of the clinical target volume (CTV) or any other volume of interest at the time of planning does not represent the most probable dosimetric outcome of a given plan, as it does not include the uncertainties mentioned earlier. Currently, the PTV is used as a surrogate of the CTV's worst-case scenario for target dose estimation. However, because proton dose distributions are subject to change under these uncertainties, the validity of the PTV analysis method is questionable. To remedy this problem, we proposed the use of statistical parameters to quantify uncertainties on both the dose-volume histogram and the dose distribution directly. The robust plan analysis tool was successfully implemented to compute both the expectation value and the standard deviation of dosimetric parameters of a treatment plan under the uncertainties. For 15 lung cancer patients, the proposed method was used to quantify the dosimetric difference between the nominal situation and its expected value under the uncertainties.
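A toy sketch of the plan-evaluation idea follows: setup and range errors are sampled, a dosimetric parameter is recomputed for each scenario, and its expectation and standard deviation are reported. The one-dimensional depth-dose stand-in and the error magnitudes are placeholders, not the dissertation's dose engine or uncertainty model.

```python
import numpy as np

rng = np.random.default_rng(0)

def target_coverage(depths_cm, setup_shift_cm, range_error_frac,
                    nominal_range_cm=10.0, target=(8.0, 10.0)):
    """Toy 1D proton dose: full dose up to the (perturbed) beam range, zero beyond.
    Returns the fraction of the target interval that receives dose."""
    beam_range = nominal_range_cm * (1.0 + range_error_frac) + setup_shift_cm
    in_target = (depths_cm >= target[0]) & (depths_cm <= target[1])
    covered = in_target & (depths_cm <= beam_range)
    return covered.sum() / in_target.sum()

depths = np.linspace(0.0, 15.0, 1501)
scenarios = [
    target_coverage(depths,
                    setup_shift_cm=rng.normal(0.0, 0.3),      # assumed 3 mm setup SD
                    range_error_frac=rng.normal(0.0, 0.035))  # assumed 3.5% range SD
    for _ in range(1000)
]
print(f"expected target coverage {np.mean(scenarios):.3f} +/- {np.std(scenarios):.3f}")
```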

Relevância: 30.00%

Resumo:

Development of homology modeling methods will remain an area of active research. These methods aim to develop increasingly accurate three-dimensional structures of as-yet uncrystallized, therapeutically relevant proteins, e.g. Class A G-protein coupled receptors. Incorporating protein flexibility is one way to achieve this goal. Here, I discuss the enhancement and validation of ligand-steered modeling, originally developed by Dr. Claudio Cavasotto, via cross-modeling of newly crystallized GPCR structures. This method uses known ligands and known experimental information to optimize relevant protein binding sites by incorporating protein flexibility. The ligand-steered models were able to reasonably reproduce the binding sites and the co-crystallized native ligand poses of the β2 adrenergic and adenosine 2A receptors using a single template structure. They also performed better than the template structures and crude models in small-scale high-throughput docking experiments and compound selectivity studies. Next, the application of this method to develop high-quality homology models of Cannabinoid Receptor 2, an emerging non-psychoactive pain management target, is discussed. These models were validated by their ability to rationalize structure-activity relationship data for two series of compounds, inverse agonists and agonists. The method was also applied to improve the virtual screening performance of the β2 adrenergic crystal structure by optimizing the binding site using β2-specific compounds. These results show the feasibility of optimizing only the pharmacologically relevant protein binding sites and the applicability of the approach to structure-based drug design projects.

Relevância: 30.00%

Resumo:

An optimization method for conceptual design in Aeronautics is presented that is based on the use of reduced-order surrogate models. The various ingredients of the target function are calculated for each individual using surrogates of the associated technical disciplines, which are constructed via high-order singular value decomposition (HOSVD) and one-dimensional interpolation. These surrogates are built from a limited number of CFD-calculated snapshots. The surrogates are combined with an optimization method, which can be either a global optimization method, such as a genetic algorithm, or a local, gradient-like method. The resulting method is both flexible and much more computationally efficient than the conventional approach based on direct calculation of the target function, especially if a large number of free design parameters and/or tunable modeling parameters are present. The method is illustrated by considering a simplified version of the conceptual design of an aircraft empennage.
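A minimal one-parameter sketch of the surrogate idea is shown below (the paper itself uses HOSVD over several design parameters): an SVD of a few precomputed snapshots provides modes whose amplitudes are interpolated at new parameter values, and the reconstructed objective is handed to an optimizer. The toy snapshot generator and the drag-like objective are illustrative stand-ins for CFD results.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# --- snapshots: pretend each column is an expensive CFD result over a 1D field ---
params = np.linspace(0.0, 1.0, 6)            # sampled design-parameter values
x = np.linspace(0.0, 1.0, 200)               # spatial/field coordinate
snapshots = np.stack([np.sin(2 * np.pi * x * (1 + p)) * (1 - p) for p in params], axis=1)

# --- SVD surrogate: spatial modes plus 1D interpolation of modal amplitudes ---
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 4                                         # retained modes
amplitudes = np.diag(s[:r]) @ Vt[:r]          # shape (r, n_snapshots)

def surrogate_field(p):
    """Reconstruct the field at parameter p by interpolating each modal amplitude."""
    a = np.array([np.interp(p, params, amplitudes[k]) for k in range(r)])
    return U[:, :r] @ a

def objective(p):
    """Toy scalar target function evaluated on the surrogate (a drag-like measure)."""
    return float(np.mean(surrogate_field(p) ** 2))

best = minimize_scalar(objective, bounds=(0.0, 1.0), method="bounded")
print(f"surrogate optimum at p = {best.x:.3f}, objective = {best.fun:.4f}")
```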

Relevância: 30.00%

Resumo:

This paper addresses the target localization problem in wireless visual sensor networks. Specifically, each node with a low-resolution camera extracts multiple feature points to represent the target at the sensor node level. A statistical method is presented for merging the position information from different sensor nodes to select the most correlated feature point pair at the base station. This method reduces the influence of target-extraction accuracy on the accuracy of target localization in the universal coordinate system. Simulations show that, compared with a related approach, the proposed method achieves better target localization accuracy and a better trade-off between camera node usage and localization accuracy.
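The paper does not detail the merging statistic, so the following is only a hypothetical illustration of fusing per-node candidate positions at the base station: over all node pairs, the cross-node candidate pair that agrees most closely is selected and averaged.

```python
import itertools
import numpy as np

def fuse_candidates(node_candidates):
    """node_candidates: list (one entry per camera node) of arrays of shape (k_i, 2),
    each row a candidate target position in the universal coordinate system derived
    from one extracted feature point. Returns a fused 2D position estimate.

    Hypothetical rule: over all pairs of nodes, pick the cross-node candidate pair
    with the smallest disagreement and average it."""
    best = (np.inf, None)
    for (i, ci), (j, cj) in itertools.combinations(enumerate(node_candidates), 2):
        d = np.linalg.norm(ci[:, None, :] - cj[None, :, :], axis=2)   # pairwise distances
        a, b = np.unravel_index(np.argmin(d), d.shape)
        if d[a, b] < best[0]:
            best = (d[a, b], 0.5 * (ci[a] + cj[b]))
    return best[1]

# usage with three nodes, each contributing a few candidate positions
est = fuse_candidates([np.array([[2.1, 3.0], [2.5, 3.4]]),
                       np.array([[2.2, 3.1], [4.0, 1.0]]),
                       np.array([[2.0, 2.9]])])
print(est)
```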

Relevância: 30.00%

Resumo:

This paper deals with the detection and tracking of an unknown number of targets using a Bayesian hierarchical model with target labels. To approximate the posterior probability density function, we develop a two-layer particle filter. One deals with track initiation, and the other with track maintenance. In addition, the parallel partition method is proposed to sample the states of the surviving targets.
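A minimal bootstrap particle filter for the track-maintenance layer alone (single target, constant-velocity motion, Gaussian position measurements) is sketched below; the hierarchical model with target labels, the initiation layer, and the parallel partition sampler are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, weights, z, dt=1.0, q=0.5, r=1.0):
    """One predict/update/resample step of a bootstrap particle filter.
    particles: (N, 4) states [x, y, vx, vy]; z: 2D position measurement."""
    n = len(particles)
    # predict: constant-velocity motion plus Gaussian process noise
    particles[:, 0:2] += particles[:, 2:4] * dt
    particles += rng.normal(0.0, q, particles.shape)
    # update: weight by Gaussian likelihood of the position measurement
    d2 = np.sum((particles[:, 0:2] - z) ** 2, axis=1)
    weights *= np.exp(-0.5 * d2 / r ** 2)
    weights /= weights.sum()
    # resample (multinomial) to avoid weight degeneracy
    idx = rng.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)

# usage: track a target that drifts with velocity (1, 0.5)
particles = rng.normal(0.0, 5.0, size=(2000, 4))
weights = np.full(2000, 1.0 / 2000)
for t in range(1, 11):
    z = np.array([1.0 * t, 0.5 * t]) + rng.normal(0.0, 1.0, 2)
    particles, weights = particle_filter_step(particles, weights, z)
print("estimated position:", particles[:, 0:2].mean(axis=0))
```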

Relevância: 30.00%

Resumo:

The proliferation of wireless sensor networks and the variety of envisioned applications associated with them have motivated the development of distributed algorithms for collaborative processing over networked systems. One application that has attracted the attention of researchers is target localization, where the nodes of the network try to estimate the position of an unknown target that lies within their coverage area. Particularly challenging is the problem of estimating the target's position from the received signal strength indicator (RSSI), due to the nonlinear relationship between the measured signal and the true position of the target. Many existing approaches suffer either from high computational complexity (e.g., particle filters) or from lack of accuracy. Further, many of the proposed solutions are centralized, which makes their application to a sensor network questionable. Depending on the application at hand, and from a practical perspective, it can be convenient to find a balance between localization accuracy and complexity. In this direction, we approach the maximum likelihood location estimation problem by solving a suboptimal (and more tractable) problem. One of the main advantages of the proposed scheme is that it allows for a decentralized implementation using distributed processing tools (e.g., consensus and convex optimization) and is therefore very suitable for implementation in real sensor networks. If further accuracy is needed, an additional refinement step can be performed around the found solution. Under the assumption of independent noise among the nodes, such a local search can be done in a fully distributed way using a distributed, consensus-based version of the Gauss-Newton method.

Regardless of the underlying application or function of the sensor network, it is always necessary to have a mechanism for data reporting. While some approaches use a special kind of node (called a sink node) for data harvesting and forwarding to the outside world, there are scenarios where such an approach is impractical or even impossible to deploy. Further, such sink nodes become a bottleneck in terms of traffic flow and power consumption. To overcome these issues, instead of using sink nodes for data reporting, one can use collaborative beamforming techniques to forward the generated data directly to a base station or gateway to the outside world. In a distributed environment like a sensor network, nodes cooperate to form a virtual antenna array that can exploit the benefits of multi-antenna communications. In collaborative beamforming, nodes synchronize their phases so that their transmissions add constructively at the receiver. One of the inconveniences associated with collaborative beamforming techniques is that there is no control over the radiation pattern, since it is treated as a random quantity. This may cause interference to other coexisting systems and fast battery depletion at the nodes. Since energy efficiency is a major design issue, we consider the development of a distributed collaborative beamforming scheme that maximizes the network lifetime while meeting a quality of service (QoS) requirement at the receiver side. Using local information about battery status and channel conditions, we derive distributed algorithms that converge to the optimal centralized beamformer. While in the first part we consider only battery depletion due to communications beamforming, we then extend the model to account for more realistic scenarios by introducing an additional random energy consumption at each node. It is shown how the new problem generalizes the original one and under which conditions it is easily solvable. By formulating the problem from the energy-efficiency perspective, the network's lifetime is significantly improved.
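A centralized Gauss-Newton sketch for RSSI-based localization under the usual log-distance path-loss model is given below; the thesis performs this refinement in a distributed, consensus-based manner, which is not shown here, and the path-loss parameters are assumptions.

```python
import numpy as np

def rssi_model(target, anchors, p0=-40.0, n=3.0):
    """Expected RSSI (dBm) at each anchor under a log-distance path-loss model."""
    d = np.linalg.norm(anchors - target, axis=1)
    return p0 - 10.0 * n * np.log10(d)

def gauss_newton_rssi(anchors, rssi, x0, p0=-40.0, n=3.0, iters=20):
    """Refine a target position estimate by Gauss-Newton on the RSSI residuals."""
    x = x0.astype(float).copy()
    for _ in range(iters):
        diff = x - anchors                               # (m, 2) vectors anchor -> target
        d2 = np.sum(diff ** 2, axis=1)
        r = rssi - (p0 - 5.0 * n * np.log10(d2))         # residuals (10*log10(d) = 5*log10(d^2))
        # Jacobian of the model wrt x: d/dx [-10 n log10(d)] = -(10 n / ln 10) * diff / d^2
        J = -(10.0 * n / np.log(10.0)) * diff / d2[:, None]
        x += np.linalg.lstsq(J, r, rcond=None)[0]        # Gauss-Newton step
    return x

rng = np.random.default_rng(2)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([6.0, 3.0])
measurements = rssi_model(true_pos, anchors) + rng.normal(0.0, 1.0, len(anchors))
print(gauss_newton_rssi(anchors, measurements, x0=np.array([5.0, 5.0])))
```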