940 results for Optimal power flow (OPF)


Relevance: 30.00%

Abstract:

OBJECTIVES This study compared clinical outcomes and revascularization strategies among patients presenting with low ejection fraction, low-gradient (LEF-LG) severe aortic stenosis (AS) according to the assigned treatment modality. BACKGROUND The optimal treatment modality for patients with LEF-LG severe AS and concomitant coronary artery disease (CAD) requiring revascularization is unknown. METHODS Of 1,551 patients, 204 with LEF-LG severe AS (aortic valve area <1.0 cm², ejection fraction <50%, and mean gradient <40 mm Hg) were allocated to medical therapy (MT) (n = 44), surgical aortic valve replacement (SAVR) (n = 52), or transcatheter aortic valve replacement (TAVR) (n = 108). CAD complexity was assessed using the SYNTAX score (SS) in 187 of 204 patients (92%). The primary endpoint was mortality at 1 year. RESULTS LEF-LG severe AS patients undergoing SAVR were more likely to undergo complete revascularization (17 of 52, 35%) compared with TAVR (8 of 108, 8%) and MT (0 of 44, 0%) patients (p < 0.001). Compared with MT, both SAVR (adjusted hazard ratio [adj HR]: 0.16; 95% confidence interval [CI]: 0.07 to 0.38; p < 0.001) and TAVR (adj HR: 0.30; 95% CI: 0.18 to 0.52; p < 0.001) improved survival at 1 year. In TAVR and SAVR patients, CAD severity was associated with higher rates of cardiovascular death (no CAD: 12.2% vs. low SS [0 to 22]: 15.3% vs. high SS [>22]: 31.5%; p = 0.037) at 1 year. Compared with no CAD/complete revascularization, TAVR and SAVR patients undergoing incomplete revascularization had significantly higher 1-year cardiovascular death rates (adj HR: 2.80; 95% CI: 1.07 to 7.36; p = 0.037). CONCLUSIONS Among LEF-LG severe AS patients, SAVR and TAVR improved survival compared with MT. CAD severity was associated with worse outcomes, and incomplete revascularization predicted 1-year cardiovascular mortality among TAVR and SAVR patients.

Relevance: 30.00%

Abstract:

Frontal alpha band asymmetry (FAA) is a marker of altered reward processing in major depressive disorder (MDD), associated with reduced approach behavior and withdrawal. However, its association with brain metabolism remains unclear. The aim of this study is to investigate FAA and its correlation with resting-state cerebral blood flow (rCBF). We hypothesized an association of FAA with regional rCBF in brain regions relevant for reward processing and motivated behavior, such as the striatum. We enrolled 20 patients and 19 healthy subjects. FAA scores and rCBF were quantified with the use of EEG and arterial spin labeling. Correlations between the two were evaluated, as well as the association of FAA with psychometric assessments of motivated behavior and anhedonia. Patients showed a left-lateralized pattern of frontal alpha activity and a correlation of FAA lateralization with subscores of the Hamilton Depression Rating Scale linked to motivated behavior. An association of rCBF and FAA scores was found in clusters in the dorsolateral prefrontal cortex bilaterally (patients) and in the left medial frontal gyrus, the right caudate head and the right inferior parietal lobule (whole group). No correlations were found in healthy controls. Higher inhibitory right-lateralized alpha power was associated with lower rCBF values in prefrontal and striatal regions, predominantly in the right hemisphere, which are involved in the processing of motivated behavior and reward. Inhibitory brain activity in the reward system may contribute to some of the motivational problems observed in MDD.
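The abstract does not spell out how FAA is computed; a conventional choice is the log difference of alpha-band power between homologous frontal electrodes. The sketch below illustrates that convention only, assuming Python with NumPy/SciPy, an F3/F4 electrode pair, and an 8-13 Hz alpha band; the study's actual pipeline may differ.

```python
# Minimal sketch of a conventional frontal alpha asymmetry (FAA) score.
# Electrode pair (F3/F4), alpha band limits, and the log-difference
# convention are assumptions based on common practice, not the authors' code.
import numpy as np
from scipy.signal import welch

def alpha_power(signal, fs, band=(8.0, 13.0)):
    """Integrated Welch PSD over the alpha band (rectangle rule on the frequency grid)."""
    f, pxx = welch(signal, fs=fs, nperseg=int(4 * fs))
    mask = (f >= band[0]) & (f <= band[1])
    return float(np.sum(pxx[mask]) * (f[1] - f[0]))

def faa_score(eeg_f3, eeg_f4, fs):
    """FAA = ln(right alpha power) - ln(left alpha power).
    Higher alpha power is usually read as lower cortical activity."""
    return np.log(alpha_power(eeg_f4, fs)) - np.log(alpha_power(eeg_f3, fs))

# Example with synthetic data: two 60 s channels sampled at 250 Hz
rng = np.random.default_rng(0)
fs = 250.0
t = np.arange(0, 60, 1 / fs)
f3 = np.sin(2 * np.pi * 10 * t) + rng.normal(scale=0.5, size=t.size)
f4 = 0.8 * np.sin(2 * np.pi * 10 * t) + rng.normal(scale=0.5, size=t.size)
print(round(faa_score(f3, f4, fs), 3))
```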

Relevance: 30.00%

Abstract:

We present a power-scalable approach for yellow laser-light generation based on standard ytterbium (Yb)-doped fibers. To force the cavity to lase at 1154 nm, far above the gain maximum, measures must be taken to fulfill the lasing condition and to suppress competing amplified spontaneous emission (ASE) in the high-gain region. To prove the principle, we built a fiber-laser cavity and a fiber amplifier, both at 1154 nm. Between the cavity and the amplifier, we suppressed the ASE by 70 dB using a fiber Bragg grating (FBG) based filter. Finally, we demonstrated efficient single-pass frequency doubling to 577 nm with a periodically poled lithium niobate (PPLN) crystal. With our linearly polarized 1154 nm master oscillator power fiber amplifier (MOFA) system, we achieved slope efficiencies of more than 15% inside the cavity and 24% with the fiber amplifier. The frequency doubling followed the predicted optimal efficiency achievable with a PPLN crystal. So far we have generated 1.5 W at 1154 nm and 90 mW at 577 nm. Our MOFA approach for generating 1154 nm laser radiation is power-scalable using multi-stage amplifiers and large mode-area fibers, and is therefore very promising for building a high-power yellow laser-light source of several tens of watts.
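For orientation, the standard slope-efficiency definition and the single-pass conversion ratio implied by the quoted powers can be written out. The proportionality on the right is the textbook undepleted-pump approximation for second-harmonic generation, given here as context rather than as a result taken from the paper.

```latex
% Slope efficiency (standard definition), the conversion implied by the
% quoted powers, and the undepleted-pump scaling (textbook SHG, not a
% result from the paper itself).
\[
  \eta_{\text{slope}} = \frac{\mathrm{d}P_{\text{out}}}{\mathrm{d}P_{\text{pump}}},
  \qquad
  \eta_{\text{SHG}} = \frac{P_{577\,\mathrm{nm}}}{P_{1154\,\mathrm{nm}}}
                    = \frac{0.09\,\mathrm{W}}{1.5\,\mathrm{W}} = 6\,\%,
  \qquad
  \eta_{\text{SHG}} \propto P_{1154\,\mathrm{nm}} \ \ (\text{undepleted pump}).
\]
```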

Relevance: 30.00%

Abstract:

Bargaining is the building block of many economic interactions, ranging from bilateral to multilateral encounters and from situations in which the actors are individuals to negotiations between firms or countries. In all these settings, economists have long been intrigued by the fact that some projects, trades or agreements are not realized even though they are mutually beneficial. On the one hand, this has been explained by incomplete information. A firm may not be willing to offer a wage that is acceptable to a qualified worker, because it knows that there are also unqualified workers and cannot distinguish between the two types. This phenomenon is known as adverse selection. On the other hand, it has been argued that even with complete information, the presence of externalities may impede efficient outcomes. To see this, consider the example of climate change. If a subset of countries agrees to curb emissions, non-participant regions benefit from the signatories' efforts without incurring costs. These free-riding opportunities give rise to incentives to strategically improve one's bargaining power that work against the formation of a global agreement. This thesis is concerned with extending our understanding of both factors, adverse selection and externalities. The findings are based on empirical evidence from original laboratory experiments as well as game-theoretic modeling. On a very general note, it is demonstrated that the institutions through which agents interact matter to a large extent. Insights are provided about which institutions we should expect to perform better than others, at least in terms of aggregate welfare. Chapters 1 and 2 focus on the problem of adverse selection. Effective operation of markets and other institutions often depends on good information transmission properties. In terms of the example introduced above, a firm is only willing to offer high wages if it receives enough positive signals about the worker's quality during the application and wage bargaining process. In Chapter 1, it is shown that repeated interaction coupled with time costs facilitates information transmission. By making the wage bargaining process costly for the worker, the firm is able to obtain more accurate information about the worker's type. The cost could be pure time cost from delaying agreement or the cost of effort arising from a multi-step interviewing process. In Chapter 2, I abstract from time costs and show that communication can play a similar role. The simple fact that a worker claims to be of high quality may be informative. In Chapter 3, the focus is on a different source of inefficiency. Agents strive for bargaining power and thus may be motivated by incentives that are at odds with the socially efficient outcome. I have already mentioned the example of climate change. Other examples are coalitions within committees that are formed to secure voting power to block outcomes, or groups that commit to different technological standards although a single standard would be optimal (e.g., the format war between HD DVD and Blu-ray). It will be shown that such inefficiencies are directly linked to the presence of externalities and a certain degree of irreversibility in actions. I now discuss the three articles in more detail. In Chapter 1, Olivier Bochet and I study a simple bilateral bargaining institution that eliminates trade failures arising from incomplete information. In this setting, a buyer makes offers to a seller in order to acquire a good. Whenever an offer is rejected by the seller, the buyer may submit a further offer. Bargaining is costly because both parties suffer a (small) time cost after any rejection. The difficulties arise because the good can be of low or high quality and its quality is known only to the seller. Indeed, without the possibility of making repeated offers, it is too risky for the buyer to offer prices that allow for trade of high-quality goods. When repeated offers are allowed, however, at equilibrium both types of goods trade with probability one. We provide an experimental test of these predictions. Buyers gather information about sellers using specific price offers, and rates of trade are high, in line with the model's qualitative predictions. We also observe a persistent over-delay before trade occurs, and this reduces efficiency substantially. Possible channels for over-delay are identified in the form of two behavioral assumptions missing from the standard model, loss aversion (buyers) and haggling (sellers), which reconcile the data with the theoretical predictions. Chapter 2 also studies adverse selection, but interaction between buyers and sellers now takes place within a market rather than in isolated pairs. Remarkably, in a market it suffices to let agents communicate in a very simple manner to mitigate trade failures. The key insight is that better-informed agents (sellers) are willing to truthfully reveal their private information, because by doing so they are able to reduce search frictions and attract more buyers. Behavior observed in the experimental sessions closely follows the theoretical predictions. As a consequence, costless and non-binding communication (cheap talk) significantly raises rates of trade and welfare. Previous experiments have documented that cheap talk alleviates inefficiencies due to asymmetric information. These findings are explained by pro-social preferences and lie aversion. I use appropriate control treatments to show that such considerations play only a minor role in our market. Instead, the experiment highlights the ability to organize markets as a new channel through which communication can facilitate trade in the presence of private information. In Chapter 3, I theoretically explore coalition formation via multilateral bargaining under complete information. The environment studied is extremely rich in the sense that the model allows for all kinds of externalities. This is achieved by using so-called partition functions, which pin down a coalitional worth for each possible coalition in each possible coalition structure. It is found that although binding agreements can be written, efficiency is not guaranteed, because the negotiation process is inherently non-cooperative. The prospects of cooperation are shown to depend crucially on i) the degree to which players can renegotiate and gradually build up agreements, and ii) the absence of a certain type of externalities that can loosely be described as incentives to free ride. Moreover, the willingness to concede bargaining power is identified as a novel reason for gradualism. Another key contribution of the study is that it identifies a strong connection between the Core, one of the most important concepts in cooperative game theory, and the set of environments for which efficiency is attained even without renegotiation.
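For readers unfamiliar with the concept, the Core mentioned in the last sentence is, in its simplest characteristic-function form, the set written out below. This is a simplification: the thesis works with partition functions, where a coalition's worth also depends on the surrounding coalition structure, so the definition here is the special case without externalities.

```latex
% Standard Core of a transferable-utility game (N, v); the thesis itself uses
% partition functions v(S, \pi) that also depend on the coalition structure
% \pi, so this is the simpler special case without externalities.
\[
  \mathcal{C}(N, v) \;=\;
  \Bigl\{\, x \in \mathbb{R}^{N} \;:\;
     \sum_{i \in N} x_i = v(N), \quad
     \sum_{i \in S} x_i \ge v(S) \ \ \forall\, S \subseteq N \,\Bigr\}.
\]
```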

Relevance: 30.00%

Abstract:

Maternal thromboembolism and a spectrum of placenta-mediated complications including the pre-eclampsia syndromes, fetal growth restriction, fetal loss, and abruption manifest a shared etiopathogenesis and predisposing risk factors. Furthermore, these maternal and fetal complications are often linked to subsequent maternal health consequences that comprise the metabolic syndrome, namely, thromboembolism, chronic hypertension, and type II diabetes. Traditionally, several lines of evidence have linked vasoconstriction, excessive thrombosis and inflammation, and impaired trophoblast invasion at the uteroplacental interface as hallmark features of the placental complications. "Omic" technologies and biomarker development have been largely based upon advances in vascular biology, improved understanding of the molecular basis and biochemical pathways responsible for the clinically relevant diseases, and increasingly robust large cohort and/or registry-based studies. Advances in understanding of innate and adaptive immunity appear to play an important role in several pregnancy complications. Strategies aimed at improving prediction of these pregnancy complications often incorporate hemodynamic blood flow data from non-invasive imaging of the utero-placental and maternal circulations early in pregnancy. Some evidence suggests that a multiple-marker approach will yield the best-performing prediction tools, which may in turn offer the possibility of early intervention to prevent or ameliorate these pregnancy complications. Prediction of maternal cardiovascular and non-cardiovascular consequences following pregnancy represents an important area of future research, which may have significant public health consequences not only for cardiovascular disease, but also for a variety of other disorders, such as autoimmune and neurodegenerative diseases.

Relevance: 30.00%

Abstract:

Arterial spin labeling (ASL) is a technique for noninvasively measuring cerebral perfusion using magnetic resonance imaging. Clinical applications of ASL include functional activation studies, evaluation of the effect of pharmaceuticals on perfusion, and assessment of cerebrovascular disease, stroke, and brain tumor. The use of ASL in the clinic has been limited by poor image quality when large anatomic coverage is required and by the time required for data acquisition and processing. This research sought to address these difficulties by optimizing the ASL acquisition and processing schemes. To improve data acquisition, optimal acquisition parameters were determined through simulations, phantom studies and in vivo measurements. The scan time for ASL data acquisition was limited to fifteen minutes to reduce potential subject motion. A processing scheme was implemented that rapidly produced regional cerebral blood flow (rCBF) maps with minimal user input. To provide a measure of the precision of the rCBF values produced by ASL, bootstrap analysis was performed on a representative data set. The bootstrap analysis of single gray and white matter voxels yielded coefficients of variation of 6.7% and 29%, respectively, implying that the calculated rCBF value is far more precise for gray matter than for white matter. Additionally, bootstrap analysis was performed to investigate the sensitivity of the rCBF data to the input parameters and to provide a quantitative comparison of several existing perfusion models. This study guided the selection of the optimum perfusion quantification model for further experiments. The optimized ASL acquisition and processing schemes were evaluated with two ASL acquisitions on each of five normal subjects. The gray-to-white matter rCBF ratios for nine of the ten acquisitions were within ±10% of 2.6 and none were statistically different from 2.6, the typical ratio produced by a variety of quantitative perfusion techniques. Overall, this work produced an ASL data acquisition and processing technique for quantitative perfusion and functional activation studies, while revealing the limitations of the technique through bootstrap analysis.
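The bootstrap step described above can be sketched generically: resample a voxel's repeated measurements with replacement, recompute the rCBF estimate each time, and report the coefficient of variation of the resampled estimates. In the sketch below the dissertation's perfusion quantification model is replaced by a placeholder mean, so the numbers are illustrative and are not meant to reproduce the reported 6.7% and 29% values.

```python
# Minimal sketch of a voxel-level bootstrap coefficient of variation (CV)
# for an ASL rCBF estimate. `quantify_rcbf` stands in for the real
# kinetic-model-based calculation used in the dissertation.
import numpy as np

def quantify_rcbf(voxel_samples):
    # Placeholder for the actual perfusion quantification model.
    return float(np.mean(voxel_samples))

def bootstrap_cv(voxel_samples, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(voxel_samples)
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        resampled = rng.choice(voxel_samples, size=n, replace=True)
        estimates[b] = quantify_rcbf(resampled)
    return estimates.std(ddof=1) / estimates.mean()

# Example: 40 simulated control-label difference measurements for one voxel
rng = np.random.default_rng(1)
gray_voxel = rng.normal(loc=60.0, scale=12.0, size=40)   # arbitrary units
print(f"bootstrap CV: {bootstrap_cv(gray_voxel):.1%}")
```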

Relevance: 30.00%

Abstract:

This paper proposes asymptotically optimal tests for unstable parameter processes under the realistic circumstance that the researcher has little information about the unstable parameter process and the error distribution, and suggests conditions under which knowledge of those processes does not provide asymptotic power gains. I first derive a test under a known error distribution, which is asymptotically equivalent to LR tests for correctly identified unstable parameter processes under suitable conditions. The conditions are weak enough to cover a wide range of unstable processes, such as various types of structural breaks and time-varying parameter processes. The test is then extended to semiparametric models in which the underlying distribution is unknown and is treated as an infinite-dimensional nuisance parameter. The semiparametric test is adaptive in the sense that its asymptotic power function is equivalent to the power envelope under a known error distribution.
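As a concrete illustration of the kinds of instability the abstract refers to, a linear model with an unstable coefficient path can be written as below; the notation is generic and is not taken from the paper.

```latex
% Generic examples of unstable parameter processes (illustrative notation):
\[
  y_t = x_t' \beta_t + \varepsilon_t, \qquad t = 1, \dots, T,
\]
\[
  \text{structural break: } \beta_t = \beta + \delta\,\mathbf{1}\{t > \tau\},
  \qquad
  \text{time-varying parameter: } \beta_t = \beta_{t-1} + \eta_t .
\]
```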

Relevance: 30.00%

Abstract:

Background: For most cytotoxic and biologic anti-cancer agents, the response rate of the drug is commonly assumed to be non-decreasing with an increasing dose. However, an increasing dose does not always result in an appreciable increase in the response rate. This may especially be true at high doses for a biologic agent. Therefore, in a phase II trial the investigators may be interested in testing the anti-tumor activity of a drug at more than one (often two) doses, instead of only at the maximum tolerated dose (MTD). This way, when the lower dose appears equally effective, this dose can be recommended for further confirmatory testing in a phase III trial in view of potential long-term toxicity and cost considerations. A common approach to designing such a phase II trial has been to use an independent (e.g., Simon's two-stage) design at each dose, ignoring the prior knowledge about the ordering of the response probabilities at the different doses. However, failure to account for this ordering constraint in estimating the response probabilities may result in an inefficient design. In this dissertation, we developed extensions of Simon's optimal and minimax two-stage designs, including both frequentist and Bayesian methods, for two doses that assume ordered response rates between doses. Methods: Optimal and minimax two-stage designs are proposed for phase II clinical trials in settings where the true response rates at two dose levels are ordered. We borrow strength between doses using isotonic regression and control the joint and/or marginal error probabilities. Bayesian two-stage designs are also proposed under a stochastic ordering constraint. Results: Compared to Simon's designs, when controlling the power and type I error at the same levels, the proposed frequentist and Bayesian designs reduce the maximum and expected sample sizes. Most of the proposed designs also increase the probability of early termination when the true response rates are poor. Conclusion: The proposed frequentist and Bayesian designs are superior to Simon's designs in terms of operating characteristics (expected sample size and probability of early termination when the response rates are poor). Thus, the proposed designs lead to more cost-efficient and ethical trials, and may consequently improve and expedite the drug discovery process. The proposed designs may be extended to designs of multiple-group trials and drug-combination trials.
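The order-restricted estimation idea can be illustrated for the two-dose case: with only two doses, isotonic regression (pool-adjacent-violators) simply pools the data whenever the observed response rates violate the assumed ordering. The sketch below shows that estimation step only, not the full two-stage design or its error-rate calculations; function and variable names are illustrative.

```python
# Minimal sketch of the order-restricted (isotonic) estimate for two doses
# when the true response rates are assumed non-decreasing in dose
# (p_low <= p_high). With two doses, pool-adjacent-violators reduces to
# pooling the data whenever the observed rates violate the ordering.

def isotonic_two_dose(responses_low, n_low, responses_high, n_high):
    p_low = responses_low / n_low
    p_high = responses_high / n_high
    if p_low <= p_high:
        return p_low, p_high            # ordering already satisfied
    pooled = (responses_low + responses_high) / (n_low + n_high)
    return pooled, pooled               # violation: pool adjacent estimates

# Example: 8/20 responses at the low dose, 6/20 at the high dose
print(isotonic_two_dose(8, 20, 6, 20))  # -> (0.35, 0.35)
```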

Relevance: 30.00%

Abstract:

The aim of this study is to assess the experience of flow and its relationship with personality traits and age in adolescents. For this purpose, 224 participants of both sexes, aged 12-20 years, were selected and examined with two instruments: the Flow State in Adolescents scale (Leibovich de Figueroa & Schmidt, 2013), a 28-item self-report measure that assesses the flow state and covers all the aspects theoretically listed as components of the optimal experience of enjoyment; and the self-report Being a Teenager Nowadays, which evaluates 33 pairs of opposite personality characteristics representing the personality domains of the NEO-PI-R (Costa & McCrae, 1992; Costa & McCrae, 2005; Leibovich & Schmidt, 2005). Among the results, it was observed that in adolescents with high scores on the Flow State scale, the main personality trait was extroversion. The influence of age on the optimal flow experience also appears in the activities chosen.

Relevance: 30.00%

Abstract:

The episodic occurrence of debris flow events in response to stochastic precipitation and wildfire events makes hazard prediction challenging. Previous work has shown that frequency-magnitude distributions of non-fire-related debris flows follow a power law, but less is known about the distribution of post-fire debris flows. As a first step in parameterizing hazard models, we use frequency-magnitude distributions and cumulative distribution functions to compare volumes of post-fire debris flows to non-fire-related debris flows. Due to the large number of events required to parameterize frequency-magnitude distributions, and the relatively small number of post-fire event magnitudes recorded in the literature, we collected data on 73 recent post-fire events in the field. The resulting catalog of 988 debris flow events is presented as an appendix to this article. We found that the empirical cumulative distribution function of post-fire debris flow volumes is composed of smaller events than that of non-fire-related debris flows. In addition, the slope of the frequency-magnitude distribution of post-fire debris flows is steeper than that of non-fire-related debris flows, evidence that differences in the post-fire environment tend to produce a higher proportion of small events. We propose two possible explanations: 1) post-fire events occur on shorter return intervals than debris flows in similar basins that do not experience fire, causing their distribution to shift toward smaller events due to limitations in sediment supply, or 2) fire causes changes in resisting and driving forces on a package of sediment, such that a smaller perturbation of the system is required in order for a debris flow to occur, resulting in smaller event volumes.
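One standard way to estimate the slope (exponent) of such a frequency-magnitude power law is the continuous maximum-likelihood estimator applied to volumes above a threshold. The sketch below uses that generic estimator with an arbitrary threshold and synthetic data; it is offered as an illustration and is not the fitting procedure reported by the authors.

```python
# Illustrative sketch (not the authors' method): continuous maximum-likelihood
# estimate of a power-law exponent alpha for event volumes exceeding a fixed
# threshold x_min, using alpha = 1 + n / sum(ln(x_i / x_min)). Choosing x_min
# is itself a modeling decision and is simply assumed here.
import numpy as np

def powerlaw_alpha_mle(volumes, x_min):
    x = np.asarray(volumes, dtype=float)
    x = x[x >= x_min]
    n = x.size
    alpha = 1.0 + n / np.sum(np.log(x / x_min))
    stderr = (alpha - 1.0) / np.sqrt(n)      # asymptotic standard error
    return alpha, stderr

# Example with synthetic volumes drawn from a power law with alpha = 2.0
rng = np.random.default_rng(2)
x_min = 100.0                                # m^3, arbitrary threshold
samples = x_min * (1.0 - rng.random(500)) ** (-1.0 / (2.0 - 1.0))
print(powerlaw_alpha_mle(samples, x_min))
```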

Relevance: 30.00%

Abstract:

The literature on agency problems arising between controlling and minority owners claims that separation of cash-flow and control rights allows controllers to expropriate listed firms, and further that separation emerges when dual-class shares or pyramiding corporate structures exist. Dual-class shares and pyramiding coexisted in listed companies of China until the discriminated share reform was implemented in 2005. This paper presents a model of controllers' expropriation behavior, as well as empirical tests of expropriation via particular accounting items and of pyramiding-generated expropriation. Results show that expropriation is apparent for state-controlled listed companies. While the reforms have weakened the power to expropriate, separation remains and still generates expropriation. The size of expropriation is estimated at 7 to 8 percent of total assets at the mean. If the "one share, one vote" principle were to be realized, asset inflation could be reduced by 13 percent.
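For readers unfamiliar with the mechanism, a common way of quantifying the separation created by pyramiding (not necessarily the paper's exact measure) is sketched below: cash-flow rights multiply down the ownership chain, while control is usually proxied by the weakest link in the chain.

```latex
% Illustrative separation measure along a pyramid chain with ownership
% stakes s_1, ..., s_K at each layer (not necessarily the paper's measure).
\[
  \text{CF} = \prod_{k=1}^{K} s_k, \qquad
  \text{Control} = \min_{k} s_k, \qquad
  \text{Wedge} = \text{Control} - \text{CF}.
\]
\[
  \text{Example: } s_1 = s_2 = 0.51 \;\Rightarrow\;
  \text{CF} \approx 0.26, \quad \text{Control} = 0.51, \quad \text{Wedge} \approx 0.25 .
\]
```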

Relevance: 30.00%

Abstract:

The proliferation of wireless sensor networks and the variety of envisioned applications associated with them have motivated the development of distributed algorithms for collaborative processing over networked systems. One application that has attracted the attention of researchers is target localization, where the nodes of the network try to estimate the position of an unknown target that lies within its coverage area. Particularly challenging is the problem of estimating the target's position from the received signal strength indicator (RSSI), due to the nonlinear relationship between the measured signal and the true position of the target. Many of the existing approaches suffer either from high computational complexity (e.g., particle filters) or from lack of accuracy. Further, many of the proposed solutions are centralized, which makes their application to a sensor network questionable. Depending on the application at hand, and from a practical perspective, it can be convenient to find a balance between localization accuracy and complexity. In this direction, we approach the maximum likelihood location estimation problem by solving a suboptimal (and more tractable) problem. One of the main advantages of the proposed scheme is that it allows for a decentralized implementation using distributed processing tools (e.g., consensus and convex optimization) and is therefore very suitable for implementation in real sensor networks. If further accuracy is needed, an additional refinement step can be performed around the found solution. Under the assumption of independent noise among the nodes, such a local search can be done in a fully distributed way using a distributed version of the Gauss-Newton method based on consensus. Regardless of the underlying application or function of the sensor network, it is always necessary to have a mechanism for data reporting. While some approaches use a special kind of node (called a sink node) for data harvesting and forwarding to the outside world, there are some scenarios where such an approach is impractical or even impossible to deploy. Further, such sink nodes become a bottleneck in terms of traffic flow and power consumption. To overcome these issues, instead of using sink nodes for data reporting, one can use collaborative beamforming techniques to forward the generated data directly to a base station or gateway to the outside world. In a distributed environment like a sensor network, nodes cooperate to form a virtual antenna array that can exploit the benefits of multi-antenna communications. In collaborative beamforming, nodes synchronize their phases so that their signals add constructively at the receiver. One of the inconveniences associated with collaborative beamforming techniques is that there is no control over the radiation pattern, since it is treated as a random quantity. This may cause interference to other coexisting systems and fast battery depletion at the nodes. Since energy efficiency is a major design issue, we consider the development of a distributed collaborative beamforming scheme that maximizes the network lifetime while meeting a quality of service (QoS) requirement at the receiver side. Using local information about battery status and channel conditions, we find distributed algorithms that converge to the optimal centralized beamformer.
While in the first part we consider only battery depletion due to communications beamforming, we then extend the model to account for more realistic scenarios by introducing an additional random energy consumption. It is shown how the new problem generalizes the original one and under which conditions it is easily solvable. By formulating the problem from the energy-efficiency perspective, the network's lifetime is significantly improved.
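To make the starting point concrete, the sketch below writes down the centralized RSSI localization problem under a log-distance path-loss model with Gaussian noise and refines the position estimate with Gauss-Newton. The path-loss parameters and the centralized solver are illustrative assumptions; the thesis's distributed (consensus-based) surrogate and its beamforming schemes are not reproduced here.

```python
# Sketch of the centralized problem the thesis starts from: maximum-likelihood
# target localization from RSSI under a log-distance path-loss model with
# Gaussian noise, refined with Gauss-Newton. Parameters and the centralized
# solver are illustrative stand-ins, not the thesis's distributed algorithm.
import numpy as np

P0, ETA, D0 = -40.0, 3.0, 1.0          # assumed path-loss parameters (dBm, -, m)

def rssi_model(target, anchors):
    d = np.linalg.norm(anchors - target, axis=1)
    return P0 - 10.0 * ETA * np.log10(d / D0)

def gauss_newton(rssi_meas, anchors, x0, iters=20):
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)
        r = rssi_meas - rssi_model(x, anchors)          # residuals in dB
        # Jacobian of the path-loss model w.r.t. the target position
        J = (10.0 * ETA / np.log(10.0)) * (anchors - x) / d[:, None] ** 2
        x += np.linalg.lstsq(J, r, rcond=None)[0]       # Gauss-Newton step
    return x

# Example: 6 anchor nodes, one target, noisy synthetic measurements
rng = np.random.default_rng(3)
anchors = rng.uniform(0.0, 50.0, size=(6, 2))
target = np.array([20.0, 30.0])
rssi_meas = rssi_model(target, anchors) + rng.normal(scale=1.0, size=6)
print(gauss_newton(rssi_meas, anchors, x0=[25.0, 25.0]))
```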