59 results for Distributed Generator, Network Loss, Primal-Dual Interior Point Algorithm, Siting and Sizing
Abstract:
In this work we devise two novel algorithms for blind deconvolution based on a family of logarithmic image priors. In contrast to recent approaches, we consider a minimalistic formulation of the blind deconvolution problem with only two energy terms: a least-squares term for the data fidelity and an image prior based on a lower-bounded logarithm of the norm of the image gradients. We show that this energy formulation is sufficient to achieve the state of the art in blind deconvolution by a good margin over previous methods. Much of the performance is due to the chosen prior. On the one hand, this prior is very effective in favoring sparsity of the image gradients. On the other hand, it is non-convex, so solutions that can deal effectively with local minima of the energy become necessary. We devise two iterative minimization algorithms that solve a convex problem at each iteration: one obtained via the primal-dual approach and one via majorization-minimization. While the former is computationally efficient, the latter achieves state-of-the-art performance on a public dataset.
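As a hedged sketch (our notation, not taken verbatim from the paper), the two-term energy described above can be written as

```latex
% u: latent sharp image, k: blur kernel, f: observed blurry image.
% \lambda and the lower bound \varepsilon are assumed hyperparameters.
E(u,k) \;=\; \tfrac{1}{2}\,\lVert k \ast u - f \rVert_2^2
\;+\; \lambda \sum_{i} \log\!\bigl(\varepsilon + \lVert (\nabla u)_i \rVert\bigr),
\qquad \varepsilon > 0,
```

where the lower bound ε keeps the logarithm bounded from below while the log term still strongly favors sparse gradients.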
Abstract:
This study assesses the skill of several climate field reconstruction (CFR) techniques in reconstructing past precipitation over continental Europe and the Mediterranean at seasonal time scales over the last two millennia from proxy records. A number of pseudoproxy experiments are performed within the virtual reality of a regional paleoclimate simulation at 45 km resolution to analyse different aspects of reconstruction skill. Canonical Correlation Analysis (CCA), two versions of an Analog Method (AM) and Bayesian hierarchical modeling (BHM) are applied to reconstruct precipitation from a synthetic network of pseudoproxies contaminated with various types of noise. The skill of the derived reconstructions is assessed through comparison with the precipitation simulated by the regional climate model. Unlike BHM, CCA systematically underestimates the variance. The AM can be adjusted to overcome this shortcoming, presenting an intermediate behaviour between the two aforementioned techniques. However, a trade-off between reconstruction-target correlation and reconstructed variance is a drawback of all CFR techniques. CCA (BHM) presents the largest (lowest) skill in preserving the temporal evolution, whereas the AM can be tuned to improve correlation at the expense of losing variance. While BHM has been shown to perform well for temperature, it relies heavily on prescribed spatial correlation lengths; this assumption is valid for temperature but hardly warranted for precipitation. In general, none of the methods outperforms the others. All experiments agree that a dense and regularly distributed proxy network is required to reconstruct precipitation accurately, reflecting its high spatial and temporal variability. This is especially true in summer, when localised convective precipitation events cause an especially short decorrelation distance from the proxy location.
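As an illustrative sketch (not the authors' code), a pseudoproxy experiment of this kind samples model grid cells, contaminates them with noise at a chosen signal-to-noise ratio, and checks how much of the target's variance a statistical reconstruction recovers; all names, sizes, and the white-noise SNR below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "model truth": seasonal precipitation at 200 grid cells over 1000 years.
n_years, n_cells, n_proxies = 1000, 200, 30
truth = rng.standard_normal((n_years, n_cells))

# Pseudoproxies: a sparse network of grid cells contaminated with white noise
# at SNR = 0.5 (one common pseudoproxy choice; the paper tests several noise types).
proxy_cells = rng.choice(n_cells, size=n_proxies, replace=False)
snr = 0.5
pseudoproxies = truth[:, proxy_cells] + rng.standard_normal((n_years, n_proxies)) / snr

# Calibrate a simple linear (regression-based) reconstruction on one half of the
# record and validate on the other -- a stand-in for CCA/AM/BHM in the paper.
cal, val = slice(0, 500), slice(500, 1000)
coef, *_ = np.linalg.lstsq(pseudoproxies[cal], truth[cal], rcond=None)
recon = pseudoproxies[val] @ coef

# Skill diagnostics from the abstract: correlation with the target and the
# fraction of the target variance that the reconstruction retains.
r = np.mean([np.corrcoef(recon[:, j], truth[val, j])[0, 1] for j in range(n_cells)])
var_ratio = recon.var() / truth[val].var()
print(f"mean grid-cell correlation: {r:.2f}, variance ratio: {var_ratio:.2f}")
```

The variance ratio printed by such a setup is typically well below 1, illustrating the variance underestimation of regression-type CFR methods discussed above.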
Abstract:
An autonomous energy source within the human body is of key importance for the development of medical implants. This work deals with the modelling and validation of an energy harvesting device that converts myocardial contractions into electrical energy. The mechanism consists of the clockwork of a commercially available wristwatch. We developed a physical model that predicts the total amount of energy generated under an external excitation. For the validation of the model, a custom-made hexapod robot was used to accelerate the harvesting device along a given trajectory. We applied forward kinematics to determine the actual motion experienced by the harvesting device, which provides translational as well as rotational motion information for accurate simulations in three-dimensional space. The physical model was successfully validated.
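As a hedged illustration of the forward-kinematics step (not the authors' implementation), the pose of a device mounted on a moving platform can be propagated through homogeneous transforms; the trajectory, sampling rate, and mounting offset below are invented for the example:

```python
import numpy as np

def rot_z(theta):
    """Rotation about the z-axis as a 4x4 homogeneous transform."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def translate(x, y, z):
    """Pure translation as a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Hypothetical platform trajectory: vertical oscillation plus a slow yaw,
# sampled at 1 kHz; the harvester sits 5 cm off the platform centre.
t = np.linspace(0.0, 1.0, 1000)
mount = translate(0.05, 0.0, 0.0)
poses = [translate(0, 0, 0.01 * np.sin(2 * np.pi * 2 * ti)) @ rot_z(0.2 * ti) @ mount
         for ti in t]

# World-frame position of the harvester at every time step; numerical
# differentiation gives the translational velocity/acceleration that a
# physical model of the clockwork would take as input.
positions = np.array([T[:3, 3] for T in poses])
velocity = np.gradient(positions, t, axis=0)
print(positions.shape, velocity.shape)
```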
Abstract:
In this work, we propose a novel network-coding-enabled NDN architecture for the delivery of scalable video. Our scheme uses network coding to address a problem that arises in the original NDN protocol: optimal use of the bandwidth and caching resources requires coordination of the forwarding decisions. To optimize the performance of the proposed network-coding-based NDN protocol and render it suitable for the transmission of scalable video, we devise a novel rate allocation algorithm that decides on the optimal rates of Interest messages sent by clients and intermediate nodes. This algorithm guarantees that the achieved flow of Data objects maximizes the average quality of the video delivered to the client population. To support the handling of Interest messages and Data objects when intermediate nodes perform network coding, we modify the standard NDN protocol and introduce the use of Bloom filters, which efficiently store additional information about the Interest messages and Data objects. The proposed architecture is evaluated for the transmission of scalable video over PlanetLab topologies. The evaluation shows that the proposed scheme performs very close to the optimal performance.
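A minimal sketch of the Bloom-filter idea mentioned above (the hash construction, sizes, and content names are assumptions, not the paper's parameters): a node can compactly record which coded Data objects it has already seen for an Interest, at the cost of a small false-positive rate:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions over an m-bit array."""

    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = 0

    def _indexes(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item):
        for idx in self._indexes(item):
            self.bits |= 1 << idx

    def __contains__(self, item):
        return all(self.bits >> idx & 1 for idx in self._indexes(item))

# A node remembers which coded Data objects it has already forwarded for a
# given Interest name, so duplicate blocks can be avoided (names hypothetical).
seen = BloomFilter()
seen.add("/video/layer0/seg3/coded-block-17")
print("/video/layer0/seg3/coded-block-17" in seen)  # True
print("/video/layer0/seg3/coded-block-18" in seen)  # almost surely False
```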
Abstract:
Content-Centric Networking (CCN) naturally supports multi-path communication, as it allows the simultaneous use of multiple interfaces (e.g. LTE and WiFi). When multiple sources and multiple clients are considered, the optimal set of distribution trees should be determined in order to make optimal use of all the available interfaces. This is not a trivial task, as it is a computationally intensive procedure that must be carried out centrally. The need for central coordination can be removed by employing network coding, which also offers improved resilience to errors and large throughput gains. In this paper, we propose NetCodCCN, a protocol for integrating network coding in CCN. In contrast to previous works proposing to enable network coding in CCN, NetCodCCN permits Interest aggregation and Interest pipelining, which reduce data retrieval times. The experimental evaluation shows that the proposed protocol leads to significant improvements in content retrieval delay compared to the original CCN. Our results demonstrate that the use of network coding adds robustness to losses and allows the available network resources to be exploited more efficiently. The performance gains are verified for content retrieval in various network scenarios.
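As an illustrative sketch of the network-coding primitive (not the NetCodCCN protocol itself), a node forwards random linear combinations of the Data packets it holds, and a client decodes once it has collected enough linearly independent combinations; the field GF(2) and the packet sizes are simplifications chosen for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)

# Four original Data packets of 64 bits each, viewed as vectors over GF(2).
packets = rng.integers(0, 2, size=(4, 64), dtype=np.uint8)

def encode_one():
    """Emit one random linear combination (XOR of a random subset of packets),
    tagged with its coefficient vector."""
    c = rng.integers(0, 2, size=4, dtype=np.uint8)
    return c, c @ packets % 2

def decode(coeffs, coded):
    """Gaussian elimination over GF(2); returns the packets once rank is full."""
    A = np.concatenate([coeffs, coded], axis=1).astype(np.uint8)
    row = 0
    for col in range(4):
        pivots = np.nonzero(A[row:, col])[0]
        if len(pivots) == 0:
            continue
        A[[row, row + pivots[0]]] = A[[row + pivots[0], row]]
        for r in range(A.shape[0]):
            if r != row and A[r, col]:
                A[r] ^= A[row]
        row += 1
    return A[:4, 4:] if row == 4 else None

# A client keeps collecting coded blocks until decoding succeeds -- no
# coordination about *which* blocks to fetch is needed, which is why network
# coding removes the need for centrally computed distribution trees.
coeffs, coded, result = [], [], None
while result is None:
    c, d = encode_one()
    coeffs.append(c)
    coded.append(d)
    if len(coeffs) >= 4:
        result = decode(np.array(coeffs), np.array(coded))
assert np.array_equal(result, packets)
```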
Abstract:
Objective Although osteopenia is frequent in spondyloarthritis (SpA), the underlying cellular mechanisms and the association with other symptoms are poorly understood. This study aimed to characterize bone loss during disease progression, determine cellular alterations, and assess the contribution of inflammatory bowel disease (IBD) to bone loss in HLA-B27 transgenic rats. Methods Bones of 2-, 6-, and 12-month-old non-transgenic, disease-free HLA-B7 and disease-associated HLA-B27 transgenic rats were examined using peripheral quantitative computed tomography, μCT, and nanoindentation. Cellular characteristics were determined by histomorphometry and ex vivo cultures. The impact of IBD was determined using [21-3 x 283-2]F1 rats, which develop arthritis and spondylitis but not IBD. Results HLA-B27 transgenic rats continuously lost bone mass with increasing age and had impaired bone material properties, leading to a 3-fold decrease in bone strength at 12 months of age. Bone turnover was increased in HLA-B27 transgenic rats, as evidenced by a 3-fold increase in bone formation and a 6-fold increase in bone resorption parameters. Enhanced osteoclastic markers were associated with a larger number of osteoclast precursors in the bone marrow and a stronger osteoclastogenic response to RANKL or TNFα. Furthermore, IBD-free [21-3 x 283-2]F1 rats also displayed decreased total and trabecular bone density. Conclusions HLA-B27 transgenic rats progressively lose bone density and strength with age, mediated primarily via increased bone remodeling in favor of bone resorption. Moreover, IBD and bone loss appear to be independent features of SpA in HLA-B27 transgenic rats.
Abstract:
Despite various research activities across the world in recent decades, many challenges remain in integrating the concept of ecosystem services (ESS) into decision-making, and a coherent approach to assessing and valuing ESS is still lacking. There are many different, often context-specific, ESS frameworks with their own definitions and understandings of terms. Based on a thorough review, the EU FP7 project RECARE (www.recare-project.eu) suggests an adapted framework for ecosystem services related to soils that can be applied in practice to prevent and remediate the degradation of soils in Europe. This lays the foundation for developing and selecting appropriate methods to measure, evaluate, communicate and negotiate with stakeholders the services we obtain from soils, in order to improve land management. Like many ESS frameworks, the RECARE framework distinguishes between an ecosystem part and a human well-being part. As the RECARE project focuses on soil threats, these are the starting point on the ecosystem side of the framework. Soil threats affect natural capital, such as soil, water, vegetation, air and animals, and are in turn influenced by them. Within natural capital, the RECARE framework focuses especially on soil and its properties, classified into inherent and manageable properties. Natural capital then enables and underpins soil processes, while at the same time being affected by them. Soil processes, finally, constitute the ecosystem's capacity to provide services; they thus support the provision of soil functions and ESS. ESS may be utilized to produce benefits for individuals and human society. Those benefits are explicitly or implicitly valued by individuals and human society, and the values placed on them influence policy and decision-making, leading to a societal response. Individual (e.g. farmers') and societal decision-making and policy determine land management and other (human) driving forces, which in turn affect soil threats and natural capital. In order to improve ESS through Sustainable Land Management (SLM), i.e. measures aimed at preventing or remediating soil threats, the services identified in the framework need to be "manageable" (modifiable) by the stakeholders. To this end, the effects of soil threats and of prevention and remediation measures are captured by key soil properties as well as by bio-physical (e.g. reduced soil loss), socio-economic (e.g. reduced workload) and socio-cultural (e.g. aesthetics) impact indicators. For such indicators to be usable in RECARE, it must be possible to attribute changes in soil processes to the impacts of prevention and remediation measures (SLM). This requires indicators that are sensitive enough to detect small changes, yet sufficiently robust to provide evidence of the change and attribute it to SLM.
Abstract:
Basilar artery occlusion (BAO) is one of the most devastating forms of stroke, and few patients have good outcomes without recanalization. Most centers apply recanalization therapies for BAO up to 12-24 hours after symptom onset, a substantially longer time window than the 4.5 hours used in anterior circulation stroke. In this speculative synthesis, we discuss recent advances in BAO treatment in order to understand why, and under which circumstances, longer symptom duration might not lead to brainstem necrosis and render therapeutic attempts futile. We raise the possibility that distinct features of the posterior circulation, e.g. a highly developed, persistent collateral arterial network, reverse filling of the distal basilar artery, and delicate plasma flow along the clot, might sustain fragile patency of the brainstem perforators in the face of stepwise growth of the thrombus. Meanwhile, the tissue clock characterizing the rapid necrosis of a typical anterior circulation penumbra will not start. During this perilous period, recanalization at any point would salvage the brainstem from the eventual necrosis caused by imminent reinforcement and further build-up of the clot.
Abstract:
Five test runs were performed to assess possible bias when using the loss-on-ignition (LOI) method to estimate the organic matter and carbonate content of lake sediments. An accurate and stable weight loss was achieved after 2 h of burning pure CaCO3 at 950 °C, whereas the LOI of pure graphite at 530 °C showed a direct relation to sample size and exposure time, with only 40-70% of the possible weight loss reached after 2 h of exposure and smaller samples losing weight faster than larger ones. Experiments with a standardised lake sediment revealed a strong initial weight loss at 550 °C, but samples continued to lose weight at a slow rate at exposures of up to 64 h, likely the effect of loss of volatile salts, structural water of clay minerals or metal oxides, or of inorganic carbon after the initial burning of organic matter. A further test run revealed that at 550 °C samples in the centre of the furnace lost more weight than marginal samples. At 950 °C this pattern was still apparent, but the differences became negligible. Again, LOI depended on sample size. An analytical LOI quality-control experiment including ten different laboratories was carried out, using each laboratory's own LOI procedure as well as a standardised LOI procedure to analyse three different sediments. The range of LOI values between laboratories measured at 550 °C was generally larger when each laboratory used its own method than when using the standard method. This was similar for 950 °C, although the range of values tended to be smaller. The within-laboratory range of LOI measurements for a given sediment was generally small. Comparison of the results of the individual and standardised methods suggests a laboratory-specific pattern in the results, probably due to differences in laboratory equipment and/or handling that could not be eliminated by standardising the LOI procedure. Factors such as sample size, exposure time, position of samples in the furnace and the laboratory measuring affected the LOI results, with LOI at 550 °C being more susceptible to these factors than LOI at 950 °C. We therefore recommend that analysts be consistent in the LOI method used, with respect to ignition temperatures, exposure times and sample size, and that they report these three parameters when referring to the method.
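For reference, the standard LOI calculations that this protocol feeds into can be sketched as follows (a minimal sketch; the 1.36 conversion from CO2 loss to carbonate assumes all weight loss at 950 °C derives from carbonate, which is the usual but, as the experiments above show, not guaranteed interpretation):

```python
def loi(dw105, dw550, dw950):
    """Loss on ignition as % of dry weight.

    dw105: sample weight after drying at 105 degC
    dw550: weight after ignition at 550 degC (organic matter burnt off)
    dw950: weight after ignition at 950 degC (CO2 evolved from carbonate)
    """
    loi550 = (dw105 - dw550) / dw105 * 100   # ~ organic matter %
    loi950 = (dw550 - dw950) / dw105 * 100   # CO2 loss from carbonate %
    carbonate = loi950 * 1.36                # x 60/44, CO2 -> CO3
    return loi550, loi950, carbonate

# Example with invented weights: a 2.000 g dried sample weighing 1.700 g
# after 550 degC and 1.480 g after 950 degC.
print(loi(2.000, 1.700, 1.480))  # (15.0, 11.0, ~14.96)
```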
Abstract:
BACKGROUND Multiple scores have been proposed to stratify bleeding risk, but their value to guide dual antiplatelet therapy duration has never been appraised. We compared the performance of the CRUSADE (Can Rapid Risk Stratification of Unstable Angina Patients Suppress Adverse Outcomes With Early Implementation of the ACC/AHA Guidelines), ACUITY (Acute Catheterization and Urgent Intervention Triage Strategy), and HAS-BLED (Hypertension, Abnormal Renal/Liver Function, Stroke, Bleeding History or Predisposition, Labile INR, Elderly, Drugs/Alcohol Concomitantly) scores in 1946 patients recruited in the Prolonging Dual Antiplatelet Treatment After Grading Stent-Induced Intimal Hyperplasia Study (PRODIGY) and assessed hemorrhagic and ischemic events in the 24- and 6-month dual antiplatelet therapy groups. METHODS AND RESULTS Bleeding score performance was assessed with a Cox regression model and C statistics. Discriminative and reclassification power was assessed with net reclassification improvement and integrated discrimination improvement. The C statistic was similar between the CRUSADE score (area under the curve 0.71) and ACUITY (area under the curve 0.68), and higher than that of HAS-BLED (area under the curve 0.63). CRUSADE, but not ACUITY, improved reclassification (net reclassification index 0.39, P=0.005) and discrimination (integrated discrimination improvement index 0.0083, P=0.021) of major bleeding compared with HAS-BLED. Major bleeding and transfusions were higher in the 24- versus 6-month dual antiplatelet therapy groups in patients with a CRUSADE score >40 (hazard ratio for bleeding 2.69, P=0.035; hazard ratio for transfusions 4.65, P=0.009) but not in those with a CRUSADE score ≤40 (hazard ratio for bleeding 1.50, P=0.25; hazard ratio for transfusions 1.37, P=0.44), with positive interaction (Pint=0.05 and Pint=0.01, respectively). The numbers of patients with high CRUSADE scores needed to treat for harm for major bleeding and transfusion were 17 and 15, respectively, with 24-month rather than 6-month dual antiplatelet therapy; the corresponding figures in the overall population were 67 and 71. CONCLUSIONS Our analysis suggests that the CRUSADE score predicts major bleeding similarly to ACUITY and better than HAS-BLED in an all-comer population undergoing percutaneous coronary intervention, and potentially identifies patients at higher risk of hemorrhagic complications when treated with a long-term dual antiplatelet therapy regimen. CLINICAL TRIAL REGISTRATION URL: http://clinicaltrials.gov. Unique identifier: NCT00611286.
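As a reminder of how number-needed-to-treat-for-harm (NNH) figures like those above are derived (a worked sketch with invented event rates, not the trial's actual data), NNH is the reciprocal of the absolute risk increase between treatment arms:

```python
def nnh(risk_long, risk_short):
    """Number needed to harm: reciprocal of the absolute risk increase."""
    return 1.0 / (risk_long - risk_short)

# Invented illustrative rates: if 24-month DAPT caused major bleeding in 9% of
# high-CRUSADE patients versus 3% with 6-month DAPT, the NNH would be ~17.
print(round(nnh(0.09, 0.03)))  # 17
```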
Abstract:
PURPOSE To assess possible effects of working memory (WM) training on cognitive functioning, functional MRI activation and brain connectivity in patients with juvenile MS. METHODS Cognitive status, fMRI activation and inter-network connectivity were assessed in five patients with juvenile MS aged between 12 and 18 years. They then received computerized WM training for four weeks. Primary cognitive outcome measures were WM (visual and verbal) and alertness. WM-related activation patterns were assessed during fMRI using an N-back task with increasing difficulty. Inter-network connectivity analyses focused on the fronto-parietal (left and right), default-mode (dorsal and ventral) and anterior salience networks. Cognitive functioning, fMRI and inter-network connectivity were reassessed directly after the training and again nine months later. RESULTS A response to treatment was seen in two patients, who showed increased WM and alertness performance after the training. These behavioural changes were accompanied by increased WM network activation and systematic changes in inter-network connectivity. The remaining participants did not respond to treatment. Effects on cognitive performance were maintained up to nine months after training, whereas the effects observed by fMRI disappeared. CONCLUSIONS Responders showed training effects on all applied outcome measures. Disease activity and general intelligence may be factors associated with response to treatment.
Abstract:
BACKGROUND Antiretroviral therapy (ART) initiation is now recommended irrespective of CD4 count. However, data on the relationship between CD4 count at ART initiation and loss to follow-up (LTFU) are limited and conflicting. METHODS We conducted a cohort analysis including all adults initiating ART (2008-2012) at three public sector sites in South Africa. LTFU was defined as no visit in the 6 months before database closure. The Kaplan-Meier estimator and Cox proportional hazards models were used to examine the relationship between CD4 count at ART initiation and 24-month LTFU. Final models were adjusted for demographics, year of ART initiation and programme expansion, and corrected for unascertained mortality. RESULTS Among 17 038 patients, the median CD4 count at initiation increased from 119 cells/μL (IQR 54-180) in 2008 to 257 cells/μL (IQR 175-318) in 2012. In unadjusted models, observed LTFU was associated with both CD4 counts <100 cells/μL and CD4 counts ≥300 cells/μL. After adjustment, patients with CD4 counts ≥300 cells/μL were 1.35 (95% CI 1.12 to 1.63) times as likely to be LTFU after 24 months as those with a CD4 count of 150-199 cells/μL. This increased risk for patients with CD4 counts ≥300 cells/μL was largest in the first 3 months on treatment. Correction for unascertained deaths attenuated the association between CD4 counts <100 cells/μL and LTFU, while the association between CD4 counts ≥300 cells/μL and LTFU persisted. CONCLUSIONS Patients initiating ART at higher CD4 counts may be at increased risk of LTFU. With programmes initiating patients at higher CD4 counts, models of ART delivery need to be reoriented to support long-term retention.
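A minimal sketch of this kind of survival analysis (assuming the `lifelines` package; the column names and data are invented for illustration, and the published models additionally adjust for demographics, year of initiation, programme expansion and unascertained mortality):

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

# Hypothetical cohort extract: follow-up time (months), LTFU indicator, and
# baseline CD4 category indicators (reference: 150-199 cells/uL).
df = pd.DataFrame({
    "months":    [24, 3, 18, 24, 7, 10, 12, 2, 24, 20],
    "ltfu":      [0, 1, 1, 0, 1, 1, 1, 1, 0, 0],
    "cd4_lt100": [1, 1, 0, 0, 0, 0, 0, 1, 0, 0],
    "cd4_ge300": [0, 0, 1, 0, 1, 0, 1, 0, 0, 1],
})

# Kaplan-Meier estimate of time to LTFU, then a Cox model for the CD4 effect.
km = KaplanMeierFitter().fit(df["months"], event_observed=df["ltfu"])
print(km.median_survival_time_)

cph = CoxPHFitter().fit(df, duration_col="months", event_col="ltfu")
cph.print_summary()  # hazard ratios for CD4 <100 and >=300 vs 150-199
```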
Abstract:
Training company networks are one model within the Swiss dual vocational education and training system. They allow small and medium-sized enterprises to pool the training of apprentices. What reasons drive companies to take part in this new type of organization? What conflicts and tensions arise within these networks? The analyses draw on four case-study networks and on the theory of the economics of conventions. These networks arise from a plurality of participation motives, a source of dissatisfaction within the companies and of conflicts within the networks throughout the training pathway.
Abstract:
Blind deconvolution is the estimation of a sharp image and a blur kernel from an observed blurry image. Because the blur model admits several solutions, it is necessary to devise an image prior that favors the true blur kernel and sharp image. Many successful image priors enforce the sparsity of the sharp image gradients. Ideally, the L0 “norm” is the best choice for promoting sparsity, but because it is computationally intractable, some methods have used a logarithmic approximation. In this work we also study a logarithmic image prior. We show empirically how well the prior suits the blind deconvolution problem. Our analysis confirms experimentally the hypothesis that a prior need not model natural image statistics to correctly estimate the blur kernel. Furthermore, we show that a simple maximum a posteriori formulation is enough to achieve state-of-the-art results. To minimize this formulation we devise two iterative minimization algorithms that cope with the non-convexity of the logarithmic prior: one obtained via the primal-dual approach and one via majorization-minimization.
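A minimal sketch of the majorization-minimization idea for a logarithmic prior (not the paper's full algorithm; the 1-D denoising setting, the squared-gradient form of the log term, and all parameters are simplifications chosen for brevity): at each iteration the concave log term is majorized by its tangent, leaving a reweighted quadratic problem that is convex and solvable in closed form.

```python
import numpy as np

def mm_log_denoise(y, lam=0.5, eps=1e-3, iters=50):
    """Majorization-minimization for min_x 0.5||x-y||^2 + lam*sum(log(eps+(Dx)^2)).

    At each iteration the concave log is majorized by its tangent in (Dx)^2,
    leaving a reweighted quadratic that reduces to a linear system
    (a 1-D denoising stand-in for the deblurring energy in the paper).
    """
    n = len(y)
    D = np.diff(np.eye(n), axis=0)          # forward-difference operator
    x = y.copy()
    for _ in range(iters):
        w = 1.0 / (eps + (D @ x) ** 2)      # tangent weights at the current x
        A = np.eye(n) + 2 * lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(A, y)           # convex surrogate: closed-form solve
    return x

# Noisy piecewise-constant signal: the log prior strongly favors sparse gradients.
rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.2], 50)
x_hat = mm_log_denoise(clean + 0.1 * rng.standard_normal(150))
print(np.abs(np.diff(x_hat)).max())
```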