869 results for: Markov chains hidden Markov models Viterbi algorithm Forward-Backward algorithm maximum likelihood
Abstract:
The use of group-randomized trials is particularly widespread in the evaluation of health care, educational, and screening strategies. Group-randomized trials represent a subset of a larger class of designs, often labeled nested, hierarchical, or multilevel, and are characterized by the randomization of intact social units or groups rather than individuals. The application of random effects models to group-randomized trials requires the specification of fixed and random components of the model. The underlying assumption is usually that these random components are normally distributed. This research is intended to determine whether the Type I error rate and power are affected when the assumption of normality for the random component representing the group effect is violated. In this study, simulated data are used to examine the Type I error rate, power, bias, and mean squared error of the estimates of the fixed effect and the observed intraclass correlation coefficient (ICC) when the random component representing the group effect possesses distributions with non-normal characteristics, such as heavy tails or severe skewness. The simulated data are generated with various characteristics (e.g. number of schools per condition, number of students per school, and several within-school ICCs) observed in most small, school-based, group-randomized trials. The analysis is carried out using SAS PROC MIXED, Version 6.12, with random effects specified in a RANDOM statement and restricted maximum likelihood (REML) estimation. The results from the non-normally distributed data are compared to the results obtained from the analysis of data with similar design characteristics but normally distributed random effects. The results suggest that violation of the normality assumption for the group component by a skewed or heavy-tailed distribution does not appear to influence the estimation of the fixed effect, Type I error, or power. Negative biases were detected when estimating the sample ICC and dramatically increased in magnitude as the true ICC increased. These biases were not as pronounced when the true ICC was within the range observed in most group-randomized trials (i.e. 0.00 to 0.05). The normally distributed group effect also resulted in biased ICC estimates when the true ICC was greater than 0.05; however, this may be a result of higher correlation within the data.
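For readers who want to reproduce this kind of check outside SAS, here is a minimal sketch, assuming a Python environment with statsmodels as a stand-in for PROC MIXED: it simulates one two-arm group-randomized trial with a skewed (shifted-exponential) school-level random effect, fits a random-intercept model by REML, and recovers the ICC from the variance components. The sample sizes and ICC value are illustrative, not the dissertation's settings.

```python
# Sketch (not the dissertation's SAS code): simulate a group-randomized trial with a
# skewed school-level random effect and fit a random-intercept model by REML.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_schools, n_students = 10, 50          # schools per condition, students per school
icc_true, sigma2_e = 0.05, 1.0          # within-school ICC and residual variance
sigma2_u = icc_true * sigma2_e / (1 - icc_true)

rows = []
for cond in (0, 1):
    # Skewed (shifted exponential) school effects with variance sigma2_u, mean 0
    u = rng.exponential(np.sqrt(sigma2_u), n_schools) - np.sqrt(sigma2_u)
    for j, uj in enumerate(u):
        y = 0.0 * cond + uj + rng.normal(0, np.sqrt(sigma2_e), n_students)
        rows += [{"y": yi, "condition": cond, "school": f"{cond}_{j}"} for yi in y]
data = pd.DataFrame(rows)

model = smf.mixedlm("y ~ condition", data, groups=data["school"]).fit(reml=True)
var_u, var_e = model.cov_re.iloc[0, 0], model.scale
print(model.summary())
print("estimated ICC:", var_u / (var_u + var_e))
```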
Abstract:
(1) A mathematical theory for computing the probabilities of various nucleotide configurations is developed, and the probability of obtaining the correct phylogenetic tree (model tree) from sequence data is evaluated for six phylogenetic tree-making methods (UPGMA, the distance Wagner method, the transformed distance method, the Fitch-Margoliash method, the maximum parsimony method, and the compatibility method). The number of nucleotides (m*) necessary to obtain the correct tree with a probability of 95% is estimated with special reference to the human, chimpanzee, and gorilla divergence. m* is at least 4,200, but the availability of outgroup species greatly reduces m* for all methods except UPGMA. m* increases if transitions occur more frequently than transversions, as in the case of mitochondrial DNA. (2) A new tree-making method called the neighbor-joining method is proposed. This method is applicable to either distance data or character state data. Computer simulation has shown that the neighbor-joining method is generally better than UPGMA, Farris' method, Li's method, and the modified Farris method at recovering the true topology when distance data are used. A related method, the simultaneous partitioning method, is also discussed. (3) The maximum likelihood (ML) method for phylogeny reconstruction under the assumption of both constant and varying evolutionary rates is studied, and a new algorithm for obtaining the ML tree is presented. This method gives a tree similar to that obtained by UPGMA when a constant evolutionary rate is assumed, whereas it gives a tree similar to those obtained by the maximum parsimony method and the neighbor-joining method when varying evolutionary rates are assumed.
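As an illustration of the neighbor-joining criterion mentioned above (not the original implementation), the following minimal Python sketch performs one agglomeration step: it computes the Q matrix from a distance matrix, picks the pair of taxa to join, and derives the branch lengths to the new internal node. The toy distance matrix is hypothetical.

```python
# Illustrative sketch: one neighbor-joining agglomeration step on a small distance matrix.
import numpy as np

def nj_pick_pair(D):
    """Return the pair (i, j) minimizing the neighbor-joining Q criterion."""
    n = D.shape[0]
    r = D.sum(axis=1)                          # row sums of the distance matrix
    Q = (n - 2) * D - r[:, None] - r[None, :]  # Q(i, j) = (n - 2) d_ij - r_i - r_j
    np.fill_diagonal(Q, np.inf)
    i, j = np.unravel_index(np.argmin(Q), Q.shape)
    # Branch lengths from the joined pair to the new internal node
    li = 0.5 * D[i, j] + (r[i] - r[j]) / (2 * (n - 2))
    lj = D[i, j] - li
    return (i, j), (li, lj)

# Toy distance matrix for four taxa (hypothetical values)
D = np.array([[0, 5, 9, 9],
              [5, 0, 10, 10],
              [9, 10, 0, 8],
              [9, 10, 8, 0]], float)
print(nj_pick_pair(D))
```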
Abstract:
We present an image quality assessment and enhancement method for high-resolution Fourier-domain OCT imaging, as used in sub-threshold retina therapy. A maximum-likelihood deconvolution algorithm as well as a histogram-based quality assessment method are evaluated.
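The abstract does not name the deconvolution algorithm; as a hedged illustration only, the sketch below implements Richardson-Lucy iteration, a standard maximum-likelihood deconvolution scheme under Poisson noise. The image and point spread function are assumed to be supplied by the caller and are hypothetical.

```python
# Hedged sketch: Richardson-Lucy iteration as one maximum-likelihood deconvolution scheme.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
    """Iteratively maximize the Poisson likelihood of `image` given the 2-D `psf`."""
    estimate = np.full_like(image, image.mean(), dtype=float)
    psf_flipped = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)          # data / model ratio
        estimate *= fftconvolve(ratio, psf_flipped, mode="same")
    return estimate

# Usage (hypothetical names): deconvolved = richardson_lucy(bscan, psf)
```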
Abstract:
A search is conducted for non-resonant new phenomena in dielectron and dimuon final states, originating from either contact interactions or large extra spatial dimensions. The LHC 2012 proton–proton collision dataset recorded by the ATLAS detector is used, corresponding to 20 fb⁻¹ at √s = 8 TeV. The dilepton invariant mass spectrum is a discriminating variable in both searches, with the contact interaction search additionally utilizing the dilepton forward-backward asymmetry. No significant deviations from the Standard Model expectation are observed. Lower limits are set on the ℓℓqq contact interaction scale Λ between 15.4 TeV and 26.3 TeV, at the 95% credibility level. For large extra spatial dimensions, lower limits are set on the string scale M_S between 3.2 TeV and 5.0 TeV.
Abstract:
BACKGROUND The Pulmonary Embolism Quality of Life questionnaire (PEmb-QoL) is a 40-item questionnaire to measure health-related quality of life in patients with pulmonary embolism. It covers six dimensions: frequency of complaints, limitations in activities of daily living, work-related problems, social limitations, intensity of complaints, and emotional complaints. The questionnaire was originally developed in Dutch and English; we prospectively validated a German version of the PEmb-QoL. METHODS A forward-backward translation of the English version of the PEmb-QoL into German was performed. German-speaking consecutive adult patients aged ≥18 years with an acute, objectively confirmed pulmonary embolism discharged from a Swiss university hospital (01/2011-06/2013) were recruited by telephone. Established psychometric tests and criteria were used to evaluate the acceptability, reliability, and validity of the German PEmb-QoL questionnaire. To assess the underlying dimensions, an exploratory factor analysis was performed. RESULTS Overall, 102 patients were enrolled in the study. The German version of the PEmb-QoL showed good internal consistency (Cronbach's alpha ranging from 0.72 to 0.96), item-total (0.53-0.95) and inter-item correlations (>0.4), and test-retest reliability (intra-class correlation coefficients 0.59-0.89) for the dimension scores. A moderate correlation of the PEmb-QoL with SF-36 dimension and summary scores (0.21-0.83) indicated convergent validity, while low correlations of PEmb-QoL dimensions with clinical characteristics (-0.16 to 0.37) supported discriminant validity. The exploratory factor analysis suggested four underlying dimensions: limitations in daily activities, symptoms, work-related problems, and emotional complaints. CONCLUSION The German version of the PEmb-QoL questionnaire is a valid and reliable disease-specific measure of quality of life in patients with pulmonary embolism.
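For illustration of the reliability measures reported above (not the study's code), the sketch below computes Cronbach's alpha and corrected item-total correlations from a respondents-by-items response matrix; the matrix X is a hypothetical stand-in for one PEmb-QoL dimension.

```python
# Illustrative sketch: Cronbach's alpha and corrected item-total correlations.
import numpy as np

def cronbach_alpha(X):
    """X: rows = respondents, columns = items of one dimension."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)
    total_var = X.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def item_total_correlations(X):
    X = np.asarray(X, dtype=float)
    total = X.sum(axis=1)
    # Corrected: correlate each item with the sum of the *other* items
    return np.array([np.corrcoef(X[:, j], total - X[:, j])[0, 1]
                     for j in range(X.shape[1])])

# Hypothetical 5-point responses for 8 items from 100 respondents
rng = np.random.default_rng(0)
X = rng.integers(1, 6, (100, 8))
print(cronbach_alpha(X), item_total_correlations(X).round(2))
```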
Abstract:
Analysis of recurrent events has been widely discussed in medical, health services, insurance, and engineering areas in recent years. This research proposes to use a nonhomogeneous Yule process with the proportional intensity assumption to model the hazard function for recurrent events data and the associated risk factors. This method assumes that repeated events occur for each individual, with given covariates, according to a nonhomogeneous Yule process with intensity function λ_x(t) = λ_0(t) · exp(x′β). One of the advantages of using a nonhomogeneous Yule process for recurrent events is that it assumes the recurrence rate is proportional to the number of events that have occurred up to time t. Maximum likelihood estimation is used to provide estimates of the parameters in the model, and a generalized scoring iterative procedure is applied in the numerical computation. Model comparisons between the proposed method and other existing recurrent-event models are addressed by simulation. An example concerning recurrent myocardial infarction events, comparing two distinct populations (Mexican-Americans and Non-Hispanic Whites) in the Corpus Christi Heart Project, is examined.
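As a rough illustration of the intensity form quoted above (not the dissertation's Yule-process estimator), the sketch below simulates recurrent-event times from an intensity of the form λ_x(t) = λ_0(t) · exp(x′β) by Lewis-Shedler thinning; the baseline hazard, covariates, and coefficients are hypothetical.

```python
# Minimal sketch: simulate event times from lambda_x(t) = lambda_0(t) * exp(x'beta)
# by Lewis-Shedler thinning. Baseline hazard and covariates are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def lambda0(t):
    return 0.5 + 0.1 * t                 # hypothetical, increasing baseline hazard

def simulate_events(x, beta, t_max):
    lam_max = lambda0(t_max) * np.exp(x @ beta)   # bound valid because lambda0 increases
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)       # candidate time from the bounding rate
        if t > t_max:
            return np.array(events)
        accept_prob = lambda0(t) * np.exp(x @ beta) / lam_max
        if rng.random() < accept_prob:            # thinning step
            events.append(t)

print(simulate_events(x=np.array([1.0, 0.0]), beta=np.array([0.4, -0.2]), t_max=10.0))
```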
Abstract:
Monte Carlo simulation has been conducted to investigate parameter estimation and hypothesis testing in some well-known adaptive randomization procedures. The four urn models studied are the Randomized Play-the-Winner (RPW), the Randomized Pólya Urn (RPU), the Birth and Death Urn with Immigration (BDUI), and the Drop-the-Loser Urn (DL). Two sequential estimation methods, sequential maximum likelihood estimation (SMLE) and the doubly adaptive biased coin design (DABC), are simulated at three optimal allocation targets that minimize the expected number of failures under the assumption of constant variance of the simple difference (RSIHR), the relative risk (ORR), and the odds ratio (OOR), respectively. The log likelihood ratio test and three Wald-type tests (simple difference, log of relative risk, log of odds ratio) are compared across the different adaptive procedures. Simulation results indicate that although RPW is slightly better at assigning more patients to the superior treatment, the DL method is considerably less variable and its test statistics have better normality. Compared with SMLE, DABC has a slightly higher overall response rate with lower variance, but larger bias and variance in parameter estimation. Additionally, the test statistics in SMLE have better normality and a lower Type I error rate, and the power of hypothesis testing is more comparable with equal randomization. Usually, RSIHR has the highest power among the three optimal allocation ratios. However, the ORR allocation has better power and a lower Type I error rate when the log of the relative risk is the test statistic. The number of expected failures under ORR is smaller than under RSIHR. It is also shown that the simple difference of response rates has the worst normality among all four test statistics, and the power of the hypothesis test is always inflated when the simple difference is used. On the other hand, the normality of the log likelihood ratio test statistic is robust against the change of adaptive randomization procedure.
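As a small illustration of one of the urn designs compared above (not the dissertation's simulation code), the sketch below runs the Randomized Play-the-Winner RPW(1, 1) rule for two treatments with hypothetical success probabilities and reports the allocation proportion and overall response rate.

```python
# Illustrative sketch: the Randomized Play-the-Winner RPW(1, 1) rule for two arms.
import numpy as np

def rpw_trial(p_a, p_b, n_patients, rng):
    urn = np.array([1.0, 1.0])                   # initial balls for arms A and B
    p = np.array([p_a, p_b])
    assignments, successes = [], []
    for _ in range(n_patients):
        arm = rng.choice(2, p=urn / urn.sum())   # draw a ball with replacement
        success = rng.random() < p[arm]
        # Success: add a ball of the same arm; failure: add a ball of the other arm
        urn[arm if success else 1 - arm] += 1
        assignments.append(arm)
        successes.append(success)
    return np.array(assignments), np.array(successes)

rng = np.random.default_rng(1)
arms, outcomes = rpw_trial(p_a=0.7, p_b=0.5, n_patients=200, rng=rng)
print("proportion assigned to A:", np.mean(arms == 0))
print("overall response rate:", outcomes.mean())
```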
Abstract:
Many public health agencies and researchers are interested in comparing hospital outcomes, for example morbidity, mortality, and hospitalization, across areas and hospitals. However, since rates vary among hospitals because of several biases, we are interested in controlling for the bias and assessing real differences in clinical practices. In this study, we compared the variation between hospitals in rates of severe intraventricular haemorrhage (IVH) in infants using a Frequentist statistical approach versus Bayesian hierarchical models through a simulation study. The template data set for the simulation study comprised the number of infants with severe IVH in 24 intensive care units in the Australian and New Zealand Neonatal Network from 1995 to 1997, reflecting severe IVH rates in preterm babies. We evaluated the rates of severe IVH for the 24 hospitals with two hierarchical models in the Bayesian approach, comparing their performance with shrunken rates from the Frequentist method. Gamma-Poisson (BGP) and Beta-Binomial (BBB) models were used as the Bayesian models, and the shrunken estimator of the Gamma-Poisson (FGP) hierarchical model, fitted by maximum likelihood, was calculated as the Frequentist approach. To simulate data, the total number of infants in each hospital was kept fixed, and we analyzed the simulated data for both the Bayesian and Frequentist models with two true parameters for the severe IVH rate: one was the observed rate and the other was the expected severe IVH rate after adjusting for five predictor variables in the template data. The bias in the estimated rate of severe IVH showed that the Bayesian models gave less variable estimates than the Frequentist model. We also discussed and compared the results from the three models to examine the variation in the rate of severe IVH using 20th centile rates and the avoidable number of severe IVH cases.
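As an illustration of the Frequentist Gamma-Poisson shrinkage idea (FGP) described above, and not the study's code, the sketch below fits the Gamma prior by maximum marginal likelihood and shrinks hospital-specific rates toward the overall mean; the counts and exposures are hypothetical rather than ANZNN data.

```python
# Hedged sketch: empirical-Bayes Gamma-Poisson shrinkage of hospital-specific rates.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

y = np.array([3, 7, 1, 12, 5, 0, 9, 4])            # events per hospital (hypothetical)
n = np.array([60, 120, 40, 150, 90, 30, 110, 70])  # infants at risk (hypothetical)

def neg_marginal_loglik(params):
    a, b = np.exp(params)                    # shape and rate of the Gamma prior
    return -np.sum(gammaln(y + a) - gammaln(a) - gammaln(y + 1)
                   + a * np.log(b) + y * np.log(n) - (y + a) * np.log(n + b))

res = minimize(neg_marginal_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x)
shrunken = (a_hat + y) / (b_hat + n)         # posterior-mean (shrunken) rates
print("raw rates:     ", np.round(y / n, 3))
print("shrunken rates:", np.round(shrunken, 3))
```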
Abstract:
Standardization is a common method for adjusting for confounding factors when comparing two or more exposure categories to assess excess risk. An arbitrary choice of standard population in standardization introduces selection bias due to the healthy worker effect. Small samples in specific groups also pose problems in estimating relative risk, and assessing statistical significance is problematic. As an alternative, statistical models have been proposed to overcome such limitations and obtain adjusted rates. In this dissertation, a multiplicative model is considered to address the issues related to standardized indices, namely the Standardized Mortality Ratio (SMR) and the Comparative Mortality Factor (CMF). The model provides an alternative to the conventional standardization technique. Maximum likelihood estimates of the parameters of the model are used to construct an index similar to the SMR for estimating the relative risk of the exposure groups under comparison. A parametric bootstrap resampling method is used to evaluate the goodness of fit of the model, the behavior of the estimated parameters, and the variability in relative risk on generated samples. The model provides an alternative to both the direct and indirect standardization methods.
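For orientation (not the dissertation's multiplicative model), the sketch below computes an SMR by indirect standardization and attaches a parametric bootstrap interval, echoing the resampling idea mentioned above; the strata, person-years, and reference rates are hypothetical.

```python
# Illustrative sketch: SMR = observed / expected deaths, with a parametric bootstrap.
import numpy as np

person_years = np.array([1200.0, 2500.0, 1800.0, 900.0])   # cohort, by age stratum
reference_rates = np.array([0.001, 0.003, 0.008, 0.020])   # standard population rates
observed_deaths = 32

expected = np.sum(person_years * reference_rates)
smr = observed_deaths / expected
print(f"expected = {expected:.1f}, SMR = {smr:.2f}")

# Parametric bootstrap: resample death counts from Poisson(expected * SMR)
rng = np.random.default_rng(7)
boot_smr = rng.poisson(expected * smr, size=5000) / expected
lo, hi = np.percentile(boot_smr, [2.5, 97.5])
print(f"95% bootstrap interval for the SMR: ({lo:.2f}, {hi:.2f})")
```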
Abstract:
ACCURACY OF THE BRCAPRO RISK ASSESSMENT MODEL IN MALES PRESENTING TO MD ANDERSON FOR BRCA TESTING. Carolyn A. Garby, B.S. Supervisory Professor: Banu Arun, M.D. Hereditary Breast and Ovarian Cancer (HBOC) syndrome is due to mutations in the BRCA1 and BRCA2 genes. Women with HBOC have high risks of developing breast and ovarian cancers. Males with HBOC are commonly overlooked because male breast cancer is rare and other male cancer risks, such as prostate and pancreatic cancers, are relatively low. BRCA genetic testing is indicated for men, as it is currently estimated that 4-40% of male breast cancers result from a BRCA1 or BRCA2 mutation (Ottini, 2010), and management recommendations can be made based on genetic test results. Risk assessment models are available to provide an individualized likelihood of carrying a BRCA mutation. Only one study has been conducted to date to evaluate the accuracy of BRCAPro in males; it was based on a cohort of Italian males and utilized an older version of BRCAPro. The objective of this study is to determine whether BRCAPro5.1 is a valid risk assessment model for males who present to MD Anderson Cancer Center for BRCA genetic testing. BRCAPro has previously been validated for determining the probability of carrying a BRCA mutation, but it has not been examined further in males specifically. The total cohort consisted of 152 males who had undergone BRCA genetic testing. The cohort was stratified by indication for genetic counseling. Indications included having a known familial BRCA mutation, having a personal diagnosis of a BRCA-related cancer, or having a family history suggestive of HBOC. Overall, there were 22 (14.47%) BRCA1-positive males and 25 (16.45%) BRCA2-positive males. Receiver operating characteristic curves were constructed for the cohort overall, for each indication, and for each cancer subtype. Our findings revealed that the BRCAPro5.1 model had perfect discriminating ability at a threshold of 56.2 for males with breast cancer; however, only 2 (4.35%) of 46 were found to have BRCA2 mutations. These results are considerably lower than the upper estimate (40%) reported in the previous literature. BRCAPro does perform well in certain situations for men. Future investigation of male breast cancer and men at risk for BRCA mutations is necessary to provide a more accurate risk assessment.
Abstract:
Injection drug use is the third most frequent risk factor for new HIV infections in the United States. A dual mode of exposure, unsafe drug-use practices and risky sexual behaviors, underlies injection drug users' (IDUs') risk for HIV infection. This research study aims to characterize patterns of drug use and sexual behaviors and to examine the social contexts associated with risk behaviors among a sample of injection drug users. This cross-sectional study includes 523 eligible injection drug users from Houston, Texas, recruited into the 2009 National HIV Behavioral Surveillance project. Three separate sets of analyses were carried out. First, using latent class analysis (LCA) and maximum likelihood, we identified classes of behavior describing levels of HIV risk from nine drug and sexual behaviors. Second, eight separate multivariable regression models were built to examine the odds of reporting a given risk behavior; we constructed the most parsimonious multivariable model using a manual backward stepwise process. Third, we examined whether knowledge of HIV serostatus (self-reported positive, negative, or unknown serostatus) is associated with drug use and sexual HIV risk behaviors. Participants were mostly male, older, and non-Hispanic Black. Forty-two percent of our sample had behaviors putting them at high risk, 25% at moderate risk, and 33% at low risk for HIV infection. Individuals in the high-risk group had the highest probability of risky behaviors, characterized as almost always sharing needles (0.93), seldom using condoms (0.10), reporting recent exchange sex partners (0.90), and practicing anal sex (0.34). We observed that unsafe injecting practices were associated with high-risk sexual behaviors. IDUs who shared needles had higher odds of having anal sex (OR=2.89, 95% CI: 1.69-4.92) and unprotected sex (OR=2.66, 95% CI: 1.38-5.10) at last sex. Additionally, homelessness was associated with needle sharing (OR=2.24, 95% CI: 1.34-3.76) and cocaine use was associated with multiple sex partners (OR=1.82, 95% CI: 1.07-3.11). Furthermore, twenty-one percent of the sample was unaware of their HIV serostatus. The three groups did not differ in drug-use behaviors: always using a new sterile needle, or sharing needles or drug preparation equipment. However, IDUs unaware of their HIV serostatus were 33% more likely to report having more than three sexual partners in the past 12 months, 45% more likely to report unprotected sex, and 85% more likely to have used drugs and/or alcohol before or during last sex, compared to HIV-positive IDUs. This analysis underscores the merit of the LCA approach for empirically categorizing injection drug users into distinct classes and identifying their risk patterns using multiple indicators, and our results show considerable overlap of high-risk sexual and drug use behaviors among members of the high-risk class. The observed clustering pattern of drug and sexual risk behavior in this population confirms that injection drug users do not represent a homogeneous population in terms of HIV risk. These findings will help develop tailored prevention programs.
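As a sketch of the latent class analysis step (not the NHBS analysis code), the following fits a maximum-likelihood latent class model for binary indicators by EM; the number of classes, the toy data, and the indicators are hypothetical stand-ins for the nine drug and sexual behavior items.

```python
# Hedged sketch: maximum-likelihood latent class analysis for binary indicators via EM.
import numpy as np

def fit_lca(X, n_classes=3, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    n, n_items = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)             # class prevalences
    rho = rng.uniform(0.25, 0.75, (n_classes, n_items))  # P(item = 1 | class)
    for _ in range(n_iter):
        # E-step: responsibility of each class for each respondent
        log_p = X @ np.log(rho).T + (1 - X) @ np.log(1 - rho).T + np.log(pi)
        log_p -= log_p.max(axis=1, keepdims=True)
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update prevalences and conditional item probabilities
        pi = resp.mean(axis=0)
        rho = np.clip((resp.T @ X) / resp.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
    return pi, rho

# Toy data: 500 respondents, 9 binary behavior indicators (hypothetical)
rng = np.random.default_rng(1)
X = (rng.random((500, 9)) < 0.3).astype(float)
pi_hat, rho_hat = fit_lca(X)
print("class sizes:", np.round(pi_hat, 2))
```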
Abstract:
It is well known that an identification problem exists in the analysis of age-period-cohort data because of the relationship among the three factors (date of birth + age at death = date of death). There are numerous suggestions about how to analyze such data, but no single solution has been satisfactory. The purpose of this study is to provide another analytic method by extending Cox's life-table regression model with time-dependent covariates. The new approach has the following features: (1) It is based on the conditional maximum likelihood procedure using the proportional hazards function described by Cox (1972), treating the age factor as the underlying hazard to estimate the parameters for the cohort and period factors. (2) The model is flexible, so that both the cohort and period factors can be treated as dummy or continuous variables, and parameter estimates can be obtained for numerous combinations of variables, as in a regression analysis. (3) The model is applicable even when the time periods are unequally spaced. Two specific models are considered to illustrate the new approach and are applied to U.S. prostate cancer data. We find that there are significant differences between all cohorts and that there is a significant period effect for both whites and nonwhites. The underlying hazard increases exponentially with age, indicating that older people have a much higher risk than younger people. A log transformation of relative risk shows that prostate cancer risk declined in recent cohorts for both models. However, prostate cancer risk declined 5 cohorts (25 years) earlier for whites than for nonwhites under the period factor model (0 0 0 1 1 1 1). These latter results are similar to those of the previous study by Holford (1983). The new approach offers a general method to analyze age-period-cohort data without imposing any arbitrary constraint in the model.
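To make the estimation idea concrete (a minimal sketch, not the dissertation's implementation), the code below evaluates a Cox partial log-likelihood with age as the time scale, so that cohort and period covariates are estimated while age plays the role of the underlying hazard; the toy data and covariates are hypothetical.

```python
# Minimal sketch: Breslow partial log-likelihood with age as the time scale.
import numpy as np

def cox_partial_loglik(beta, age_at_death, event, X):
    """Rows of X hold cohort/period covariates; age is the underlying time scale."""
    eta = X @ beta
    order = np.argsort(age_at_death)                 # process failure ages in order
    age, ev, eta = age_at_death[order], event[order], eta[order]
    loglik = 0.0
    for i in np.nonzero(ev)[0]:
        at_risk = age >= age[i]                      # risk set: still alive at this age
        loglik += eta[i] - np.log(np.sum(np.exp(eta[at_risk])))
    return loglik

# Toy data: two cohort dummies as covariates (hypothetical)
rng = np.random.default_rng(3)
X = rng.integers(0, 2, (100, 2)).astype(float)
age = 50 + rng.exponential(10, 100)
event = rng.random(100) < 0.8
print(cox_partial_loglik(np.array([0.2, -0.1]), age, event, X))
```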
Abstract:
Documenting changes in distribution is necessary for understanding species' responses to environmental change, but data on species distributions are heterogeneous in accuracy and resolution. Combining different data sources and methodological approaches can fill gaps in knowledge about the dynamic processes driving changes in species-rich but data-poor regions. We combined recent bird survey data from the Neotropical Biodiversity Mapping Initiative (NeoMaps) with historical distribution records to estimate potential changes in the distribution of eight species of Amazon parrots in Venezuela. Using environmental covariates and presence-only data from museum collections and the literature, we first used maximum likelihood to fit a species distribution model (SDM) estimating a historical maximum probability of occurrence for each species. We then used recent NeoMaps survey data to build single-season occupancy models (OM) with the same environmental covariates, as well as time- and effort-dependent detectability, resulting in estimates of the current probability of occurrence. We finally calculated the disagreement between predictions as a matrix of the probability of change in the state of occurrence. Our results suggested negative changes for the only restricted, threatened species, Amazona barbadensis, which has been independently confirmed by field studies. Two of the three remaining widespread species that were detected, Amazona amazonica and Amazona ochrocephala, also had a high probability of negative changes in northern Venezuela, but results were not conclusive for Amazona farinosa. The four remaining species were undetected in recent field surveys; three of these were most probably absent from the survey locations (Amazona autumnalis, Amazona mercenaria and Amazona festiva), while a fourth (Amazona dufresniana) requires more intensive, targeted sampling to estimate its current status. Our approach is unique in taking full advantage of available but limited data, and in detecting a high probability of change even for rare and patchily distributed species. However, it is presently limited to species meeting the strong assumptions required for maximum-likelihood estimation with presence-only data, including very high detectability and representative sampling of their historical distributions.
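As a hedged sketch of the occupancy-model component (not the NeoMaps analysis code), the snippet below writes the single-season occupancy likelihood with constant occupancy (psi) and detection (p) probabilities and maximizes it numerically; covariate effects would enter through the logit links shown. The detection histories are simulated and hypothetical.

```python
# Hedged sketch: maximum-likelihood fit of a constant single-season occupancy model.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def neg_loglik(params, Y):
    """Y is a sites-by-visits 0/1 detection history matrix."""
    psi, p = expit(params)                         # parameters on the logit scale
    n_visits = Y.shape[1]
    detected = Y.sum(axis=1)
    # P(history | occupied) for each site, weighted by psi
    occ = psi * p ** detected * (1 - p) ** (n_visits - detected)
    # Sites with no detections may also be truly unoccupied
    never = (detected == 0) * (1 - psi)
    return -np.sum(np.log(occ + never))

rng = np.random.default_rng(5)
z = rng.random(80) < 0.4                           # true occupancy states (hypothetical)
Y = (rng.random((80, 4)) < 0.3) * z[:, None]       # detections only where occupied
fit = minimize(neg_loglik, x0=[0.0, 0.0], args=(Y,), method="Nelder-Mead")
print("psi_hat, p_hat:", np.round(expit(fit.x), 2))
```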
Abstract:
The continental margin off northeast Australia, comprising the Great Barrier Reef (GBR) platform and the Queensland Trough, is the largest tropical mixed siliciclastic/carbonate depositional system in existence. We describe a suite of 35 piston cores and two Ocean Drilling Program (ODP) sites from a 130 × 240 km rectangular area of the Queensland Trough, the slope and basin setting east of the central GBR platform. Oxygen isotope records, physical property (magnetic susceptibility and greyscale) logs, analyses of bulk carbonate content, and radiocarbon ages at these locations are used to construct a high-resolution stratigraphy. This information is used to quantify mass accumulation rates (MARs) for siliciclastic and carbonate sediments accumulating in the Queensland Trough over the last 31,000 years. For the slope, the highest MARs of siliciclastic sediment occur during transgression (1.0 million tonnes per year; MT/yr), and the lowest MARs of siliciclastic (<0.1 MT/yr) and carbonate (0.2 MT/yr) sediment occur during sea-level lowstand. Carbonate MARs are similar to siliciclastic MARs for transgression and highstand (1.1-1.4 MT/yr). In contrast, for the basin, MARs of siliciclastic (0-0.1 MT/yr) and carbonate sediment (0.2-0.4 MT/yr) are continuously low, and within a factor of two, for lowstand, transgression, and highstand. Generic models for carbonate margins predict that maximum and minimum carbonate MARs on the slope will occur during highstand and lowstand, respectively. Conversely, most models for siliciclastic margins suggest maximum and minimum siliciclastic MARs will occur during lowstand and transgression, respectively. Although carbonate MARs in the Queensland Trough are similar to those predicted for carbonate depositional systems, siliciclastic MARs are the opposite. Given uniform siliciclastic MARs in the basin through time, we conclude that terrigenous material is stored on the shelf during sea-level lowstand and released to the slope during transgression, as wave-driven currents transport shelf sediment offshore.
Abstract:
The proliferation of wireless sensor networks and the variety of envisioned applications associated with them have motivated the development of distributed algorithms for collaborative processing over networked systems. One of the applications that has attracted the attention of researchers is target localization, where the nodes of the network try to estimate the position of an unknown target that lies within the network's coverage area. Particularly challenging is the problem of estimating the target's position from the received signal strength indicator (RSSI), due to the nonlinear relationship between the measured signal and the true position of the target. Many of the existing approaches suffer either from high computational complexity (e.g., particle filters) or from lack of accuracy. Further, many of the proposed solutions are centralized, which makes their application to a sensor network questionable. Depending on the application at hand, and from a practical perspective, it can be convenient to find a balance between localization accuracy and complexity. In this direction, we approach the maximum likelihood location estimation problem by solving a suboptimal (and more tractable) problem. One of the main advantages of the proposed scheme is that it allows for a decentralized implementation using distributed processing tools (e.g., consensus and convex optimization) and is therefore well suited to implementation in real sensor networks. If further accuracy is needed, an additional refinement step can be performed around the solution found. Under the assumption of independent noise among the nodes, such a local search can be done in a fully distributed way using a distributed, consensus-based version of the Gauss-Newton method. Regardless of the underlying application or function of the sensor network, it is always necessary to have a mechanism for data reporting. While some approaches use a special kind of node (called a sink node) for data harvesting and forwarding to the outside world, there are scenarios where such an approach is impractical or even impossible to deploy. Further, such sink nodes become a bottleneck in terms of traffic flow and power consumption. To overcome these issues, instead of using sink nodes for data reporting, one can use collaborative beamforming techniques to forward the generated data directly to a base station or gateway to the outside world. In a distributed environment like a sensor network, nodes cooperate to form a virtual antenna array that can exploit the benefits of multi-antenna communications. In collaborative beamforming, nodes synchronize their phases so that their transmissions add constructively at the receiver. One of the inconveniences associated with collaborative beamforming techniques is that there is no control over the radiation pattern, since it is treated as a random quantity. This may cause interference to other coexisting systems and fast battery depletion at the nodes. Since energy efficiency is a major design issue, we consider the development of a distributed collaborative beamforming scheme that maximizes the network lifetime while meeting a quality of service (QoS) requirement at the receiver side. Using local information about battery status and channel conditions, we find distributed algorithms that converge to the optimal centralized beamformer.
While in the first part we consider only battery depletion due to beamforming communications, we then extend the model to account for more realistic scenarios by introducing an additional random energy consumption. It is shown how the new problem generalizes the original one and under which conditions it is easily solvable. By formulating the problem from an energy-efficiency perspective, the network's lifetime is significantly improved.
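As a simplified, centralized stand-in for the distributed refinement step described above (not the thesis' algorithm), the sketch below runs Gauss-Newton on an RSSI-based localization problem under a log-distance path-loss model; the anchor positions, reference power P0, and path-loss exponent are hypothetical.

```python
# Hedged sketch: Gauss-Newton refinement of an RSSI-based location estimate
# under a log-distance path-loss model with hypothetical parameters.
import numpy as np

P0, gamma = -40.0, 3.0                      # dBm at 1 m, path-loss exponent (assumed)

def model(x, anchors):
    d = np.linalg.norm(anchors - x, axis=1)
    return P0 - 10 * gamma * np.log10(d)

def gauss_newton(rssi, anchors, x0, n_iter=20):
    x = x0.astype(float)
    for _ in range(n_iter):
        diff = x - anchors
        d2 = np.sum(diff ** 2, axis=1)
        r = rssi - model(x, anchors)                         # residuals
        J = (10 * gamma / np.log(10)) * diff / d2[:, None]   # Jacobian of the residuals
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)        # solve J * step = -r in LS sense
        x += step
    return x

rng = np.random.default_rng(11)
anchors = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], float)
true_x = np.array([3.0, 6.0])
rssi = model(true_x, anchors) + rng.normal(0, 1.0, len(anchors))
print("estimate:", gauss_newton(rssi, anchors, x0=np.array([5.0, 5.0])))
```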