917 results for Distributed Generator, Network Loss, Primal-Dual Interior Point Algorithm, Siting and Sizing
Abstract:
Network Theory is a prolific and lively field, especially where it meets Biology. New concepts from this theory find application in areas where extensive datasets are already available for analysis, without the need to invest money to collect them. The only tools necessary to carry out such an analysis are easily accessible: a computing machine and a good algorithm. As these two tools progress, thanks to technological advances and human effort, ever wider datasets can be analysed. The aim of this paper is twofold. Firstly, to provide an overview of one of these concepts, which originates at the meeting point between Network Theory and Statistical Mechanics: the entropy of a network ensemble. This quantity has been described from different angles in the literature, and our approach aims to be a synthesis of these different points of view. The second part of the work is devoted to presenting a parallel algorithm that can evaluate this quantity over an extensive dataset. Eventually, the algorithm will also be used to analyse high-throughput data coming from biology.
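For concreteness, one common formulation of this quantity treats each link as an independent Bernoulli variable with probability p_ij, in which case the ensemble entropy is the sum of the link entropies. The sketch below assumes that formulation, which is only one of the angles discussed in the literature:

```python
import numpy as np

def ensemble_entropy(p):
    """Shannon entropy (in nats) of a network ensemble in which each
    link (i, j) is an independent Bernoulli variable with probability p[i, j].
    `p` is a symmetric matrix of link probabilities."""
    iu = np.triu_indices_from(p, k=1)        # count each undirected pair once
    q = np.clip(p[iu], 1e-12, 1 - 1e-12)     # avoid log(0)
    return float(-np.sum(q * np.log(q) + (1 - q) * np.log(1 - q)))

# Example: Erdos-Renyi ensemble on 100 nodes with link probability 0.1
n, prob = 100, 0.1
p = np.full((n, n), prob)
print(ensemble_entropy(p))
```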
Abstract:
Background The World Health Organization estimates that in sub-Saharan Africa about 4 million HIV-infected patients had started antiretroviral therapy (ART) by the end of 2008. Loss of patients to follow-up and care is an important problem for treatment programmes in this region. As mortality is high in these patients compared to patients remaining in care, ART programmes with high rates of loss to follow-up may substantially underestimate mortality of all patients starting ART. Methods and Findings We developed a nomogram to correct mortality estimates for loss to follow-up, based on the fact that mortality of all patients starting ART in a treatment programme is a weighted average of mortality among patients lost to follow-up and patients remaining in care. The nomogram gives a correction factor based on the percentage of patients lost to follow-up at a given point in time, and the estimated ratio of mortality between patients lost and not lost to follow-up. The mortality observed among patients retained in care is then multiplied by the correction factor to obtain an estimate of programme-level mortality that takes all deaths into account. A web calculator directly calculates the corrected, programme-level mortality with 95% confidence intervals (CIs). We applied the method to 11 ART programmes in sub-Saharan Africa. Patients retained in care had a mortality at 1 year of 1.4% to 12.0%; loss to follow-up ranged from 2.8% to 28.7%; and the correction factor from 1.2 to 8.0. The absolute difference between uncorrected and corrected mortality at 1 year ranged from 1.6% to 9.8%, and was above 5% in four programmes. The largest difference in mortality was in a programme with 28.7% of patients lost to follow-up at 1 year. Conclusions The amount of bias in mortality estimates can be large in ART programmes with substantial loss to follow-up. Programmes should routinely report mortality among patients retained in care and the proportion of patients lost. A simple nomogram can then be used to estimate mortality among all patients who started ART, for a range of plausible mortality rates among patients lost to follow-up.
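The correction described here follows directly from the stated weighted-average relationship; the sketch below spells it out under that reading (the function and variable names are ours, not taken from the paper or its web calculator):

```python
def corrected_mortality(m_retained, p_lost, mortality_ratio):
    """Programme-level mortality estimate, treating overall mortality as a
    weighted average of mortality among patients retained in care and
    patients lost to follow-up.

    m_retained       -- observed mortality among patients retained in care (e.g. 0.05 for 5%)
    p_lost           -- proportion of patients lost to follow-up (e.g. 0.15)
    mortality_ratio  -- assumed ratio of mortality in lost vs. retained patients
    """
    correction_factor = (1.0 - p_lost) + p_lost * mortality_ratio
    return correction_factor * m_retained, correction_factor

# Example: 4% observed mortality, 20% lost to follow-up, lost patients assumed to have 5x mortality
est, factor = corrected_mortality(0.04, 0.20, 5.0)
print(factor)   # 1.8
print(est)      # 0.072 -> 7.2% corrected programme-level mortality
```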
Abstract:
REASONS FOR PERFORMING STUDY: Evidence-based information is limited on distribution of local anaesthetic solution following perineural analgesia of the palmar (Pa) and palmar metacarpal (PaM) nerves in the distal aspect of the metacarpal (Mc) region ('low 4-point nerve block'). OBJECTIVES: To demonstrate the potential distribution of local anaesthetic solution after a low 4-point nerve block using a radiographic contrast model. METHODS: A radiodense contrast medium was injected subcutaneously over the medial or the lateral Pa nerve at the junction of the proximal three-quarters and distal quarter of the Mc region (Pa injection) and over the ipsilateral PaM nerve immediately distal to the distal aspect of the second or fourth Mc bones (PaM injection) in both forelimbs of 10 mature horses free from lameness. Radiographs were obtained 0, 10 and 20 min after injection and analysed subjectively and objectively. Methylene blue and a radiodense contrast medium were injected in 20 cadaver limbs using the same techniques. Radiographs were obtained and the limbs dissected. RESULTS: After 31/40 (77.5%) Pa injections, the pattern of the contrast medium suggested distribution in the neurovascular bundle. There was significant proximal diffusion with time, but the main contrast medium patch never progressed proximal to the mid-Mc region. The radiological appearance of 2 limbs suggested that contrast medium was present in the digital flexor tendon sheath (DFTS). After PaM injections, the contrast medium was distributed diffusely around the injection site in the majority of the limbs. In cadaver limbs, after Pa injections, the contrast medium and the dye were distributed in the neurovascular bundle in 8/20 (40%) limbs and in the DFTS in 6/20 (30%) of limbs. After PaM injections, the contrast and dye were distributed diffusely around the injection site in 9/20 (45%) limbs and showed diffuse and tubular distribution in 11/20 (55%) limbs. CONCLUSIONS AND POTENTIAL RELEVANCE: Proximal diffusion of local anaesthetic solution after a low 4-point nerve block is unlikely to be responsible for decreasing lameness caused by pain in the proximal Mc region. The DFTS may be penetrated inadvertently when performing a low 4-point nerve block.
Abstract:
Both deepening sleep and evolving epileptic seizures are associated with increasing slow-wave activity. Larger-scale functional networks derived from electroencephalogram indicate that in both transitions dramatic changes of communication between brain areas occur. During seizures these changes seem to be 'condensed', because they evolve more rapidly than during deepening sleep. Here we set out to assess quantitatively functional network dynamics derived from electroencephalogram signals during seizures and normal sleep. Functional networks were derived from electroencephalogram signals from wakefulness, light and deep sleep of 12 volunteers, and from pre-seizure, seizure and post-seizure time periods of 10 patients suffering from focal onset pharmaco-resistant epilepsy. Nodes of the functional network represented electrical signals recorded by single electrodes and were linked if there was non-random cross-correlation between the two corresponding electroencephalogram signals. Network dynamics were then characterized by the evolution of global efficiency, which measures ease of information transmission. Global efficiency was compared with relative delta power. Global efficiency significantly decreased both between light and deep sleep, and between pre-seizure, seizure and post-seizure time periods. The decrease of global efficiency was due to a loss of functional links. While global efficiency decreased significantly, relative delta power increased except between the time periods wakefulness and light sleep, and pre-seizure and seizure. Our results demonstrate that both epileptic seizures and deepening sleep are characterized by dramatic fragmentation of larger-scale functional networks, and further support the similarities between sleep and seizures.
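As a rough illustration of the pipeline described above (functional network from cross-correlated channels, then global efficiency), here is a minimal sketch; the fixed correlation threshold stands in for the paper's statistical test of non-random cross-correlation:

```python
import numpy as np
import networkx as nx

def global_efficiency_from_eeg(signals, threshold=0.5):
    """Build a functional network from multichannel EEG and return its
    global efficiency. Nodes are channels; two nodes are linked when the
    absolute zero-lag correlation of their signals exceeds `threshold`.

    signals -- array of shape (n_channels, n_samples)
    """
    corr = np.corrcoef(signals)                 # channel-by-channel correlation
    n = corr.shape[0]
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if abs(corr[i, j]) > threshold:
                g.add_edge(i, j)
    # Global efficiency: mean of 1/shortest-path-length over all node pairs
    return nx.global_efficiency(g)

# Example with surrogate data: 19 channels, 10 s at 256 Hz
rng = np.random.default_rng(0)
eeg = rng.standard_normal((19, 2560))
print(global_efficiency_from_eeg(eeg, threshold=0.3))
```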
Abstract:
In this paper we present BitWorker, a platform for community distributed computing based on BitTorrent. Any splittable task can be easily specified by a user in a meta-information task file, such that it can be downloaded and performed by other volunteers. Peers find each other using Distributed Hash Tables, download existing results, and compute missing ones. Unlike existing distributed computing schemes that rely on centralized coordination points, our scheme is fully distributed and therefore highly robust. We evaluate the performance of BitWorker using mathematical models and real tests, showing processing and robustness gains. BitWorker is available for download and use by the community.
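The abstract does not specify the meta-information task file format, so the sketch below only illustrates the general idea of a splittable task whose pieces can be identified and claimed by volunteer peers; all names and fields are hypothetical:

```python
import hashlib
import json

# Hypothetical shape of a splittable task description; the real BitWorker
# meta-information file format is not specified in the abstract.
task = {
    "name": "render-frames",
    "command": "render --frame {index}",   # template run by volunteers
    "chunks": 1000,                        # independently computable pieces
}

def chunk_id(task, index):
    """Deterministic identifier for one piece of work, so that peers that
    discover each other through a DHT can tell which results already exist."""
    payload = json.dumps({"task": task["name"], "index": index}, sort_keys=True)
    return hashlib.sha1(payload.encode()).hexdigest()

def next_missing_chunk(task, completed_ids):
    """Pick a chunk whose result has not yet been downloaded from other peers."""
    for index in range(task["chunks"]):
        if chunk_id(task, index) not in completed_ids:
            return index
    return None  # everything already computed by the swarm

print(next_missing_chunk(task, completed_ids=set()))   # -> 0
```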
Abstract:
Advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing workload conditions, such as the number of connected users, application performance may suffer, leading to violations of Service Level Agreements (SLAs) and possibly inefficient use of hardware resources. Combining dynamic application requirements with the increased use of virtualised computing resources creates a challenging resource management context for application and cloud-infrastructure owners. In such complex environments, business entities use SLAs as a means of specifying quantitative and qualitative requirements of services. There are several challenges in running distributed enterprise applications in cloud environments, ranging from the instantiation of service VMs in the correct order using an adequate quantity of computing resources, to adapting the number of running services in response to varying external loads, such as the number of users. The application owner is interested in finding the optimum amount of computing and network resources needed to ensure that the performance requirements of all their applications are met. They are also interested in appropriately scaling the distributed services so that application performance guarantees are maintained even under dynamic workload conditions. Similarly, infrastructure providers are interested in optimally provisioning the virtual resources onto the available physical infrastructure so that their operational costs are minimized, while maximizing the performance of tenants' applications. Motivated by the complexities associated with the management and scaling of distributed applications, while satisfying multiple objectives (related to both consumers and providers of cloud resources), this thesis proposes a cloud resource management platform able to dynamically provision and coordinate the various lifecycle actions on both virtual and physical cloud resources using semantically enriched SLAs. The system focuses on dynamic sizing (scaling) of virtual infrastructures composed of virtual machine (VM)-bound application services. We describe several algorithms for adapting the number of VMs allocated to the distributed application in response to changing workload conditions, based on SLA-defined performance guarantees. We also present a framework for the dynamic composition of scaling rules for distributed services, which uses benchmark-generated application monitoring traces. We show how these scaling rules can be combined and included in semantic SLAs for controlling the allocation of services. We also provide a detailed description of the multi-objective infrastructure resource allocation problem and various approaches to solving it. We present a resource management system based on a genetic algorithm, which performs allocation of virtual resources while considering the optimization of multiple criteria. We show that our approach significantly outperforms reactive VM-scaling algorithms as well as heuristic-based VM-allocation approaches.
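As a simplified illustration of the kind of SLA-driven scaling rule such a framework composes, consider a threshold rule that adapts the VM count to a response-time guarantee; the thresholds and names below are illustrative, not the thesis's actual rules:

```python
def desired_vm_count(current_vms, avg_response_ms, sla_target_ms,
                     scale_out_margin=0.9, scale_in_margin=0.5,
                     min_vms=1, max_vms=20):
    """Toy threshold-based scaling rule: add a VM when the measured response
    time approaches the SLA target, remove one when there is ample headroom.
    All thresholds are illustrative."""
    if avg_response_ms > scale_out_margin * sla_target_ms:
        return min(current_vms + 1, max_vms)
    if avg_response_ms < scale_in_margin * sla_target_ms:
        return max(current_vms - 1, min_vms)
    return current_vms

# Example: SLA target of 200 ms, currently 3 VMs, measured 190 ms -> scale out
print(desired_vm_count(3, 190, 200))   # 4
```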
Abstract:
OBJECTIVE The aim of this cross-sectional study was to estimate bone loss around implants with a platform-switching design and to analyze possible risk indicators after 5 years of loading in a multi-centered private practice network. METHOD AND MATERIALS Peri-implant bone loss was measured radiographically as the distance from the implant shoulder to the mesial and distal alveolar crest, respectively. Risk factor analysis for marginal bone loss included the type of implant prosthetic treatment concept and the dental status of the opposite arch. RESULTS A total of 316 implants in 98 study patients were examined after 5 years of loading. The overall mean radiographic bone loss was 1.02 mm (SD ± 1.25 mm, 95% CI 0.90-1.14). Correlation analyses indicated a strong association between peri-implant bone loss > 2 mm and removable implant-retained prostheses, with an odds ratio of 53.8. CONCLUSION The 5-year results of the study show clinically acceptable values of mean bone loss after 5 years of loading. Implant-supported removable prostheses appear to be a strong co-factor for extensive bone level changes compared to fixed reconstructions. However, these results must be interpreted in the context of the specific cohort included, examined under private dental office conditions.
Abstract:
Aims. We present an inversion method based on Bayesian analysis to constrain the interior structure of terrestrial exoplanets, in the form of the chemical composition of the mantle and core size. Specifically, we identify what parts of the interior structure of terrestrial exoplanets can be determined from observations of mass, radius, and stellar elemental abundances. Methods. We perform a full probabilistic inverse analysis to formally account for observational and model uncertainties and obtain confidence regions of interior structure models. This enables us to characterize how model variability depends on data and associated uncertainties. Results. We test our method on terrestrial solar system planets and find that our model predictions are consistent with independent estimates. Furthermore, we apply our method to synthetic exoplanets up to 10 Earth masses and up to 1.7 Earth radii, and to exoplanet Kepler-36b. Importantly, the inversion strategy proposed here provides a framework for understanding the level of precision required to characterize the interior of exoplanets. Conclusions. Our main conclusions are (1) observations of mass and radius are sufficient to constrain core size; (2) stellar elemental abundances (Fe, Si, Mg) are principal constraints to reduce degeneracy in interior structure models and to constrain mantle composition; (3) the inherent degeneracy in determining interior structure from mass and radius observations depends not only on measurement accuracies, but also on the actual size and density of the exoplanet. We argue that precise observations of stellar elemental abundances are central to placing constraints on planetary bulk composition and to reducing model degeneracy. We provide a general methodology for analyzing the interior structures of exoplanets that may help to understand how interior models are distributed among star systems. The methodology we propose is sufficiently general to allow its future extension to more complex internal structures, including hydrogen- and water-rich exoplanets.
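To make the inversion idea concrete, the toy sketch below infers a core-size posterior from mass and radius alone, using a two-layer planet with constant densities and a grid-based Bayesian update; this is a drastic simplification of the paper's interior models, and all numerical values are assumptions:

```python
import numpy as np

# Toy two-layer planet: iron-like core plus silicate mantle, constant densities.
RHO_CORE = 10000.0    # kg/m^3 (assumed)
RHO_MANTLE = 4000.0   # kg/m^3 (assumed)
R_EARTH = 6.371e6     # m
M_EARTH = 5.972e24    # kg

def toy_mass(radius, core_fraction):
    """Mass of a two-layer planet with core radius = core_fraction * radius."""
    r_core = core_fraction * radius
    v_core = 4.0 / 3.0 * np.pi * r_core**3
    v_total = 4.0 / 3.0 * np.pi * radius**3
    return RHO_CORE * v_core + RHO_MANTLE * (v_total - v_core)

def posterior_core_fraction(m_obs, r_obs, sigma_m, n=400):
    """Grid approximation of the posterior over the core radius fraction,
    assuming a flat prior and a Gaussian mass likelihood (a stand-in for the
    paper's full probabilistic inversion)."""
    grid = np.linspace(0.0, 1.0, n)
    masses = toy_mass(r_obs, grid)
    log_like = -0.5 * ((masses - m_obs) / sigma_m) ** 2
    weights = np.exp(log_like - log_like.max())
    return grid, weights / weights.sum()

grid, post = posterior_core_fraction(1.0 * M_EARTH, 1.0 * R_EARTH, 0.05 * M_EARTH)
print(grid[np.argmax(post)])   # most probable core radius fraction under the toy model
```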
Abstract:
Many of the emerging telecom services make use of Outer Edge Networks, in particular Home Area Networks. The configuration and maintenance of such services may not be under the full control of the telecom operator, which still needs to guarantee the service quality experienced by the consumer. Diagnosing service faults in these scenarios becomes especially difficult, since there may not be full visibility between different domains. This paper describes the fault diagnosis solution developed in the MAGNETO project, based on the application of Bayesian inference to deal with uncertainty. It also takes advantage of a distributed framework to deploy diagnosis components in the different domains and network elements involved, spanning both the telecom operator and the Outer Edge networks. In addition, MAGNETO features self-learning capabilities to automatically improve diagnosis knowledge over time and a partition mechanism that allows breaking down the overall diagnosis knowledge into smaller subsets. The MAGNETO solution has been prototyped and adapted to a particular outer edge scenario, and has been further validated on a real testbed. Evaluation of the results shows the potential of our approach to deal with fault management of outer edge networks.
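As a minimal illustration of Bayesian fault diagnosis of the kind applied here (not MAGNETO's actual model), the sketch below computes a posterior over a few hypothetical fault causes from observed symptoms, assuming the symptoms are conditionally independent given the fault:

```python
# Fault hypotheses, priors and symptom probabilities are invented for illustration.
PRIOR = {"router_down": 0.05, "wifi_interference": 0.15, "isp_outage": 0.02, "no_fault": 0.78}

# P(symptom present | fault hypothesis)
LIKELIHOOD = {
    "router_down":       {"no_dhcp_lease": 0.95, "high_packet_loss": 0.90},
    "wifi_interference": {"no_dhcp_lease": 0.05, "high_packet_loss": 0.70},
    "isp_outage":        {"no_dhcp_lease": 0.10, "high_packet_loss": 0.85},
    "no_fault":          {"no_dhcp_lease": 0.01, "high_packet_loss": 0.05},
}

def diagnose(observed):
    """Posterior over fault hypotheses given a dict of observed symptoms
    (symptom -> True/False), assuming symptoms independent given the fault."""
    posterior = {}
    for fault, prior in PRIOR.items():
        p = prior
        for symptom, present in observed.items():
            p_sym = LIKELIHOOD[fault][symptom]
            p *= p_sym if present else (1.0 - p_sym)
        posterior[fault] = p
    z = sum(posterior.values())
    return {fault: p / z for fault, p in posterior.items()}

print(diagnose({"no_dhcp_lease": True, "high_packet_loss": True}))
```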
Abstract:
We present the design of a distributed object system for Prolog, based on adding remote execution and distribution capabilities to a previously existing object system. Remote execution brings RPC into a Prolog system, and its semantics is easy to express in terms of well-known Prolog builtins. The final distributed object design features state mobility and user-transparent network behavior. We sketch an implementation which provides distributed garbage collection and some degree of tolerance to network failures. We provide a preliminary study of the overhead of the communication mechanism for some test cases.
Abstract:
Abstract The proliferation of wireless sensor networks and the variety of envisioned applications associated with them have motivated the development of distributed algorithms for collaborative processing over networked systems. One application that has attracted the attention of researchers is target localization, where the nodes of the network try to estimate the position of an unknown target that lies within the network's coverage area. Particularly challenging is the problem of estimating the target's position from the received signal strength indicator (RSSI), due to the nonlinear relationship between the measured signal and the true position of the target. Many of the existing approaches suffer either from high computational complexity (e.g., particle filters) or from lack of accuracy. Further, many of the proposed solutions are centralized, which makes their application to a sensor network questionable. Depending on the application at hand, and from a practical perspective, it can be convenient to find a balance between localization accuracy and complexity. In this direction, we approach the maximum likelihood location estimation problem by solving a suboptimal (and more tractable) problem. One of the main advantages of the proposed scheme is that it allows for a decentralized implementation using distributed processing tools (e.g., consensus and convex optimization) and is therefore well suited to implementation in real sensor networks. If further accuracy is needed, an additional refinement step can be performed around the solution found. Under the assumption of independent noise among the nodes, such a local search can be done in a fully distributed way using a distributed version of the Gauss-Newton method based on consensus. Regardless of the underlying application or function of the sensor network, it is always necessary to have a mechanism for data reporting. While some approaches use a special kind of node (a sink node) for data harvesting and forwarding to the outside world, there are scenarios where such an approach is impractical or even impossible to deploy. Further, such sink nodes become a bottleneck in terms of traffic flow and power consumption. To overcome these issues, instead of using sink nodes for data reporting, one can use collaborative beamforming techniques to forward the generated data directly to a base station or gateway to the outside world. In a distributed environment like a sensor network, nodes cooperate to form a virtual antenna array that can exploit the benefits of multi-antenna communications. In collaborative beamforming, nodes synchronize their phases so that their transmissions add constructively at the receiver. One of the drawbacks of collaborative beamforming techniques is that there is no control over the radiation pattern, since it is treated as a random quantity. This may cause interference to other coexisting systems and fast battery depletion at the nodes. Since energy efficiency is a major design issue, we consider the development of a distributed collaborative beamforming scheme that maximizes the network lifetime while meeting a quality of service (QoS) requirement at the receiver side. Using local information about battery status and channel conditions, we find distributed algorithms that converge to the optimal centralized beamformer.
While in the first part we consider only the battery depletion due to beamforming communications, we then extend the model to account for more realistic scenarios by introducing an additional random energy consumption. We show how the new problem generalizes the original one and under which conditions it is easily solvable. By formulating the problem from an energy-efficiency perspective, the network's lifetime is significantly improved.
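Both the decentralized localization scheme and the lifetime-maximizing beamformer rely on consensus as a building block; as a rough illustration, here is a minimal average-consensus iteration with Metropolis weights over an assumed fixed communication graph (not the thesis's actual algorithms):

```python
import numpy as np

def metropolis_weights(adjacency):
    """Metropolis weight matrix for average consensus on an undirected graph."""
    n = adjacency.shape[0]
    deg = adjacency.sum(axis=1)
    w = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adjacency[i, j]:
                w[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        w[i, i] = 1.0 - w[i].sum()   # row sums to one
    return w

def average_consensus(values, adjacency, iterations=50):
    """Each node repeatedly replaces its value with a weighted average of its
    neighbours' values; on a connected graph, all nodes converge to the mean."""
    w = metropolis_weights(adjacency)
    x = np.asarray(values, dtype=float)
    for _ in range(iterations):
        x = w @ x
    return x

# Example: a 4-node ring, each node starts with a local measurement
ring = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]])
print(average_consensus([1.0, 2.0, 3.0, 4.0], ring))   # all entries approach 2.5
```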
Abstract:
High-resolution monochromated electron energy loss spectroscopy (EELS) at subnanometric spatial resolution and <200 meV energy resolution has been used to assess the valence band properties of a distributed Bragg reflector multilayer heterostructure composed of InAlN lattice-matched to GaN. This work thoroughly presents the collection of methods and computational tools put together for this task. Among these are zero-loss-peak subtraction and nonlinear fitting tools, and theoretical modeling of the electron scattering distribution. EELS analysis allows retrieval of a great amount of information: the indium concentration in the InAlN layers is monitored through the local plasmon energy position and calculated using a bowing-parameter version of Vegard's law. In addition, a dielectric characterization of the InAlN and GaN layers has been performed through Kramers-Kronig analysis of the valence EELS data, allowing the band gap energy to be measured and giving insight into the polytypism of the GaN layers.
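The bowing-parameter form of Vegard's law mentioned above can be inverted to get the indium fraction from the measured plasmon energy; the sketch below does this for illustrative endpoint energies and bowing parameter, which are placeholders rather than the paper's calibrated values:

```python
import numpy as np

# Bowing-parameter form of Vegard's law for the plasmon energy:
#   E_p(x) = x * E_InN + (1 - x) * E_AlN - b * x * (1 - x)
# Endpoint energies and bowing parameter below are placeholders (assumed).
E_INN = 15.7    # eV, plasmon energy of InN (assumed)
E_ALN = 21.2    # eV, plasmon energy of AlN (assumed)
BOWING = 1.0    # eV (assumed)

def indium_fraction(e_measured):
    """Solve the quadratic E_p(x) = e_measured for x in [0, 1]."""
    # BOWING*x^2 + (E_INN - E_ALN - BOWING)*x + (E_ALN - e_measured) = 0
    coeffs = [BOWING, E_INN - E_ALN - BOWING, E_ALN - e_measured]
    roots = np.roots(coeffs)
    real = [r.real for r in roots if abs(r.imag) < 1e-9 and 0.0 <= r.real <= 1.0]
    return real[0] if real else None

print(indium_fraction(20.0))   # indium fraction implied by a 20 eV plasmon peak
```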
Neural network controller for active demand side management with PV energy in the residential sector
Abstract:
In this paper, we describe the development of a control system for Demand-Side Management in the residential sector with Distributed Generation. The electrical system under study incorporates local PV energy generation, an electricity storage system, connection to the grid and a home automation system. The distributed control system is composed of two modules: a scheduler and a coordinator, both implemented with neural networks. The control system enhances the local energy performance, scheduling the tasks demanded by the user and maximizing the use of local generation.
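As a deliberately simplified stand-in for the neural scheduler described above (a greedy rule rather than a trained network), the sketch below places deferrable household tasks in the hours with the largest forecast PV surplus; the forecasts, task list, and limits are invented for illustration:

```python
# Hour-by-hour forecast PV surplus (kW) and deferrable tasks (name, power in kW);
# all values are invented for illustration.
pv_surplus_kw = [0.0, 0.0, 0.2, 0.8, 1.5, 2.1, 2.4, 2.0, 1.2, 0.4, 0.0, 0.0]
tasks = [("washing_machine", 1.0), ("dishwasher", 0.8), ("ev_charge", 2.0)]
MAX_LOAD_PER_HOUR_KW = 2.5

def schedule(tasks, surplus, max_load):
    """Greedy scheduler: assign each task to the hour with the largest
    remaining PV surplus that still has load capacity."""
    allocated = [0.0] * len(surplus)
    plan = {}
    for name, power in tasks:
        hours = sorted(range(len(surplus)),
                       key=lambda h: surplus[h] - allocated[h], reverse=True)
        for h in hours:
            if allocated[h] + power <= max_load:
                allocated[h] += power
                plan[name] = h
                break
    return plan

print(schedule(tasks, pv_surplus_kw, MAX_LOAD_PER_HOUR_KW))
```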
Abstract:
We study a cognitive radio scenario in which the network of secondary users wishes to identify which primary user, if any, is transmitting. To achieve this, the nodes will rely on some form of location information. In our previous work we proposed two fully distributed algorithms for this task, with and without a pre-detection step, using propagation parameters as the only source of location information. In a real distributed deployment, each node must estimate its own position and/or propagation parameters. Hence, in this work we study the effect of uncertainty, or error, in these estimates on the proposed distributed identification algorithms. We show that the pre-detection step significantly increases robustness against uncertainty in the nodes' locations.