883 results for "Performance evolution due time"
Abstract:
In this work we devise two novel algorithms for blind deconvolution based on a family of logarithmic image priors. In contrast to recent approaches, we consider a minimalistic formulation of the blind deconvolution problem where there are only two energy terms: a least-squares term for the data fidelity and an image prior based on a lower-bounded logarithm of the norm of the image gradients. We show that this energy formulation is sufficient to achieve the state of the art in blind deconvolution by a good margin over previous methods. Much of the performance is due to the chosen prior. On the one hand, this prior is very effective in favoring sparsity of the image gradients. On the other hand, this prior is non-convex. Therefore, solutions that can deal effectively with local minima of the energy become necessary. We devise two iterative minimization algorithms that at each iteration solve convex problems: one obtained via the primal-dual approach and one via majorization-minimization. While the former is computationally efficient, the latter achieves state-of-the-art performance on a public dataset.
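As a rough sketch of the kind of two-term energy described here (the exact symbols, weights, lower bound, and kernel constraints are assumptions for illustration, not taken from the paper), the minimization could look like:

```latex
% Sketch of a minimalistic blind-deconvolution energy with a lower-bounded
% logarithmic gradient prior (form and symbols assumed, not from the paper):
%   u : latent sharp image,  k : blur kernel,  f : observed blurry image
\min_{u,\,k}\;
  \underbrace{\tfrac{1}{2}\,\lVert k \ast u - f \rVert_2^2}_{\text{data fidelity}}
  \;+\;
  \lambda \sum_{i} \max\!\bigl(\log \lVert (\nabla u)_i \rVert,\; \tau\bigr),
\qquad \text{s.t. } \lVert k \rVert_1 = 1,\; k \ge 0 .
```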
Abstract:
In a prospective memory task, responding to a prospective memory target involves switching between the ongoing and the prospective memory task, which can result in a slowing of subsequent ongoing task performance (i.e., an after-effect). A slowing can also occur when prospective memory targets appear after the prospective memory task has been deactivated (i.e., another after-effect). Here, we investigated both after-effects within the same study and also tested whether the latter after-effect extends to subsequent ongoing task trials. The results show after-effects of all kinds: (1) correctly responding to prospective memory targets produces after-effects, a cost on ongoing task performance that has so far been neglected; (2) responding to deactivated prospective memory targets also slows performance, probably due to involuntary retrieval of the intention; and (3) this slowing persists on subsequent ongoing task trials, suggesting that even deactivated intentions are sufficient to induce a conflict that requires subsequent adaptation. Overall, these results indicate that performance slowing in a prospective memory experiment stems from various sources, not only monitoring cost, and these sources may be best understood in terms of conflict adaptation.
Abstract:
Context. OSIRIS, the scientific imaging system onboard the ESA Rosetta spacecraft, has been imaging the nucleus of comet 67P/Churyumov-Gerasimenko and its dust and gas environment since March 2014. The images serve different scientific goals, from morphology and composition studies of the nucleus surface, to the motion and trajectories of dust grains, the general structure of the dust coma, the morphology and intensity of jets, gas distribution, mass loss, and dust and gas production rates. Aims. We present the calibration of the raw images taken by OSIRIS and address the accuracy that we can expect in our scientific results based on the accuracy of the calibration steps that we have performed. Methods. We describe the pipeline that has been developed to automatically calibrate the OSIRIS images. Through a series of steps, radiometrically calibrated and distortion-corrected images are produced and can be used for scientific studies. Calibration campaigns were run on the ground before launch and throughout the years in flight to determine the parameters that are used to calibrate the images and to verify their evolution with time. We describe how these parameters were determined and we address their accuracy. Results. We provide a guideline to the level of trust that can be put in the various studies performed with OSIRIS images, based on the accuracy of the image calibration.
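The abstract does not list the individual pipeline steps, but a generic radiometric calibration of a raw CCD frame typically looks like the following sketch; the function name, arguments, constants, and step order are assumptions for illustration, not the actual OSIRIS pipeline:

```python
import numpy as np

def calibrate_frame(raw, bias, dark, flat, exposure_s, abs_cal):
    """Generic radiometric calibration sketch (not the actual OSIRIS pipeline).

    raw, bias, dark, flat : 2-D arrays of identical shape
    exposure_s            : exposure time in seconds
    abs_cal               : assumed absolute calibration factor (DN/s -> radiance)
    """
    frame = raw.astype(np.float64) - bias        # remove electronic offset
    frame -= dark * exposure_s                   # subtract dark current
    frame /= np.where(flat > 0, flat, 1.0)       # flat-field correction
    frame /= exposure_s                          # convert to DN per second
    return frame * abs_cal                       # scale to physical units
```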
Abstract:
Information-centric networking (ICN) is a new communication paradigm that has been proposed to cope with drawbacks of host-based communication protocols, namely scalability and security. In this thesis, we base our work on Named Data Networking (NDN), which is a popular ICN architecture, and investigate NDN in the context of wireless and mobile ad hoc networks. In the first part, we focus on NDN efficiency (and potential improvements) in wireless environments by investigating NDN in wireless one-hop communication, i.e., without any routing protocols. A basic requirement to initiate information-centric communication is the knowledge of existing and available content names. Therefore, we develop three opportunistic content discovery algorithms and evaluate them in diverse scenarios for different node densities and content distributions. After content names are known, requesters can retrieve content opportunistically from any neighbor node that provides the content. However, in case of short contact times to content sources, content retrieval may be disrupted. Therefore, we develop a requester application that keeps meta information of disrupted content retrievals and enables resume operations when a new content source has been found. Besides message efficiency, we also evaluate the power consumption of information-centric broadcast and unicast communication. Based on our findings, we develop two mechanisms to increase the efficiency of information-centric wireless one-hop communication. The first approach, called Dynamic Unicast (DU), avoids broadcast communication whenever possible since, compared to unicast communication, broadcast transmissions result in more duplicate Data transmissions, lower data rates, and higher energy consumption on mobile nodes that are not interested in overheard Data. Hence, DU uses broadcast communication only until a content source has been found and then retrieves content directly via unicast from the same source. The second approach, called RC-NDN, targets the efficiency of wireless broadcast communication by reducing the number of duplicate Data transmissions. In particular, RC-NDN is a Data encoding scheme for content sources that increases diversity in wireless broadcast transmissions such that multiple concurrent requesters can profit from each other's (overheard) message transmissions. If requesters and content sources are not within one-hop distance of each other, requests need to be forwarded via multi-hop routing. Therefore, in the second part of this thesis, we investigate information-centric wireless multi-hop communication. First, we consider multi-hop broadcast communication in the context of rather static community networks. We introduce the concept of preferred forwarders, which relay Interest messages slightly faster than non-preferred forwarders to reduce redundant duplicate message transmissions. While this approach works well in static networks, the performance may degrade in mobile networks where preferred forwarders regularly move away. Thus, to enable routing in mobile ad hoc networks, we extend DU for multi-hop communication. Compared to one-hop communication, multi-hop DU requires efficient path update mechanisms (since multi-hop paths may expire quickly) and new forwarding strategies to maintain NDN benefits (request aggregation and caching) such that only a few messages need to be transmitted over the entire end-to-end path even in the case of multiple concurrent requesters.
To perform quick retransmissions in case of collisions or other transmission errors, we implement and evaluate retransmission timers from related work and compare them to CCNTimer, a new algorithm that enables shorter content retrieval times in information-centric wireless multi-hop communication. Yet, in case of intermittent connectivity between requesters and content sources, multi-hop routing protocols may not work because they require continuous end-to-end paths. Therefore, we present agent-based content retrieval (ACR) for delay-tolerant networks. In ACR, requester nodes can delegate content retrieval to mobile agent nodes, which move closer to content sources, retrieve the content, and return it to the requesters. Thus, ACR exploits the mobility of agent nodes to retrieve content from remote locations. To enable delay-tolerant communication via agents, retrieved content needs to be stored persistently such that requesters can verify its authenticity via original publisher signatures. To achieve this, we develop a persistent caching concept that maintains received popular content in repositories and deletes unpopular content if free space is required. Since our persistent caching concept can complement regular short-term caching in the content store, it can also be used for network caching to store popular delay-tolerant content at edge routers (to reduce network traffic and improve network performance) while real-time traffic can still be maintained and served from the content store.
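As a minimal sketch of the kind of popularity-based eviction such a persistent repository could use (class name, counters, and the eviction rule are assumptions for illustration, not the thesis implementation):

```python
class PersistentCache:
    """Minimal sketch of popularity-based persistent caching (details assumed).

    Content objects are kept as long as space allows; when the repository is
    full, the least popular entry (fewest hits) is evicted first.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}   # content name -> content object
        self.hits = {}    # content name -> request counter

    def insert(self, name, content):
        if name not in self.store and len(self.store) >= self.capacity:
            victim = min(self.hits, key=self.hits.get)   # least popular entry
            del self.store[victim]
            del self.hits[victim]
        self.store[name] = content
        self.hits.setdefault(name, 0)

    def lookup(self, name):
        if name in self.store:
            self.hits[name] += 1
            return self.store[name]
        return None
```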
Abstract:
Although the area under the receiver operating characteristic curve (AUC) is the most popular measure of the performance of prediction models, it has limitations, especially when it is used to evaluate the added discrimination of a new biomarker in the model. Pencina et al. (2008) proposed two indices, the net reclassification improvement (NRI) and the integrated discrimination improvement (IDI), to supplement the improvement in the AUC (IAUC). Their NRI and IDI are based on binary outcomes in case-control settings, which do not involve time-to-event outcomes. However, many disease outcomes are time-dependent and the onset time can be censored. Measuring the discrimination potential of a prognostic marker without considering time to event can lead to biased estimates. In this dissertation, we have extended the NRI and IDI to survival analysis settings and derived the corresponding sample estimators and asymptotic tests. Simulation studies were conducted to compare the performance of the time-dependent NRI and IDI with Pencina's NRI and IDI. For illustration, we have applied the proposed method to a breast cancer study. Key words: prognostic model, discrimination, time-dependent NRI and IDI.
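For reference, the binary-outcome indices of Pencina et al. (2008) that are extended here to censored time-to-event data can be written as follows (the time-dependent versions derived in the dissertation are not reproduced):

```latex
% Binary-outcome NRI and IDI of Pencina et al. (2008); "up"/"down" denote
% upward/downward reclassification when the new marker is added, and
% \bar{p} denotes the mean predicted risk under the new or old model.
\mathrm{NRI} =
  \bigl[P(\text{up}\mid\text{event}) - P(\text{down}\mid\text{event})\bigr]
+ \bigl[P(\text{down}\mid\text{nonevent}) - P(\text{up}\mid\text{nonevent})\bigr]

\mathrm{IDI} =
  \bigl(\bar{p}_{\text{new,\,events}} - \bar{p}_{\text{old,\,events}}\bigr)
- \bigl(\bar{p}_{\text{new,\,nonevents}} - \bar{p}_{\text{old,\,nonevents}}\bigr)
```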
Abstract:
Information technology (IT) in the hospital organization is fast becoming a key asset, particularly in light of recent reform legislation in the United States calling for an expanded role of IT in our health care system. Future payment reductions to hospitals included in current health reform are based on expected improvements in hospital operating efficiency. Since over half of hospital expenses are for labor, improved efficiency in the use of labor resources can be critical in meeting this challenge. Policy makers have touted the value of IT investments to improve efficiency in response to payment reductions. This study was the first to directly examine the relationship between electronic health record (EHR) technology and staffing efficiency in hospitals. As the hospital has a myriad of outputs for inpatient and outpatient care, efficiency was measured using an industry-standard performance metric: full-time equivalent employees per adjusted occupied bed (FTE/AOB). Three hypotheses were tested in this study. To operationalize EHR technology adoption, we developed three constructs to model adoption, each of which was tested by a separate hypothesis. The first hypothesis, that a larger number of EHR applications used by a hospital would be associated with greater staffing efficiency (or lower values of FTE/AOB), was not accepted. The association between staffing efficiency and specific EHR applications was the second hypothesis tested; it was accepted, with some applications showing significant impacts on observed values of FTE/AOB. Finally, the hypothesis that the longer an EHR application had been used in a hospital, the greater the labor efficiency, was not accepted, as the model showed few statistically significant relationships to FTE/AOB performance. Generally, there does not appear to be a strong relationship between EHR usage and improved labor efficiency in hospitals. While returns on investment from EHR usage may not come from labor efficiencies, they may be better sought using measures of quality, contribution to an efficient and effective local health care system, and improved customer satisfaction through greater patient throughput.
Abstract:
In this work we undertake a study of chat-based virtual reference services. We address the different concepts and their evolution over time; we analyse the various services offered by university libraries around the world; and, finally, we put forward a series of recommendations for implementing a chat-based virtual reference service in Argentine university libraries.
Abstract:
Global-scale impacts modify the physical or thermal state of a substantial fraction of a target asteroid. Specific effects include accretion, family formation, reshaping, mixing and layering, shock and frictional heating, fragmentation, material compaction, dilatation, stripping of mantle and crust, and seismic degradation. Deciphering the complicated record of global-scale impacts, in asteroids and meteorites, will lead us to understand the original planet-forming process and its resultant populations, and their evolution in time as collisions became faster and fewer. We provide a brief overview of these ideas and an introduction to models.
Abstract:
High-performance computing offers significant promise for enhancing the performance of real-time flood forecasting systems. In this paper, a real-time framework for probabilistic flood forecasting through data assimilation is presented. The distributed rainfall-runoff Real-time Interactive Basin Simulator (RIBS) model is selected to simulate the hydrological process in the basin. Although the RIBS model is deterministic, it is run in a probabilistic way using the results of a calibration developed in previous work by the authors, which identifies the probability distribution functions that best characterise the most relevant model parameters. Adaptive techniques improve flood forecasts because the model can be adapted to observations in real time as new information becomes available. The new adaptive forecast model, based on genetic programming as a data assimilation technique, is compared with the previously developed flood forecast model based on the calibration results. Both models are probabilistic as they generate an ensemble of hydrographs, taking into account the different uncertainties inherent in any forecast process. The Manzanares River basin was selected as a case study; the process is computationally intensive as it requires simulation of many replicas of the ensemble in real time.
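A minimal sketch of how such an ensemble can be generated by sampling calibrated parameter distributions is shown below; the distribution choices, parameter names, and the toy runoff model are assumptions for illustration, not the actual RIBS setup:

```python
import numpy as np

rng = np.random.default_rng(42)

def run_model(params, rainfall):
    """Placeholder for one deterministic rainfall-runoff run (e.g. RIBS)."""
    # Toy linear-reservoir response, used only to make the sketch runnable.
    k, c = params["k"], params["c"]
    q, flow = 0.0, []
    for r in rainfall:
        q = (1 - 1 / k) * q + c * r
        flow.append(q)
    return np.array(flow)

def ensemble_forecast(rainfall, n_members=100):
    """Draw parameter sets from assumed calibrated distributions."""
    hydrographs = []
    for _ in range(n_members):
        params = {"k": rng.lognormal(mean=1.0, sigma=0.3),  # assumed pdf
                  "c": rng.uniform(0.1, 0.5)}               # assumed pdf
        hydrographs.append(run_model(params, rainfall))
    return np.vstack(hydrographs)   # one hydrograph per ensemble member
```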
Abstract:
To date, the majority of quality controls performed at PV plants are based on the measurement of a small sample of individual modules. Consequently, there is very little representative data on the real Standard Test Conditions (STC) power output values for PV generators. This paper presents the power output values for more than 1300 PV generators with a total installed power capacity of almost 15.3 MW. The values were obtained by the INGEPER-UPNA group, in collaboration with the IES-UPM, through a study that monitored the power output of a number of PV plants from 2006 to 2009. This work has made it possible to determine, amongst other things, the power dispersion that can be expected amongst generators made by different manufacturers, amongst generators made by the same manufacturer but comprising modules of different nameplate ratings, and amongst generators formed by modules with the same characteristics. The work also analyses the evolution of the STC power output over the course of this four-year study. The values presented here can be considered representative of generators with fault-free modules.
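For context, one common first-order way to extrapolate an outdoor power measurement to STC is shown below; this particular correction is only an illustration and not necessarily the procedure used in the study:

```latex
% Common first-order extrapolation of a measured power to STC (illustrative):
%   G   : in-plane irradiance at the time of measurement
%   G*  : STC irradiance, 1000 W/m^2
%   T_c : cell temperature, gamma : power temperature coefficient
P_{\mathrm{STC}} \approx P_{\mathrm{meas}}\,
  \frac{G^{*}}{G}\,
  \bigl[\,1 - \gamma\,(T_{c} - 25\,^{\circ}\mathrm{C})\,\bigr]
```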
Abstract:
The goal of this communication is to offer, through computer-aided design tools, a methodology to recover and virtually reconstruct disappeared buildings of our industrial historical heritage. It is applied to the case of the flour factory "El Puente Colgante" (The Suspended Bridge) in Aranjuez, which was demolished in 2001. The process is as follows: after a historical analysis of the evolution of the flour factory over time, field work provides the data needed for an infographic reconstruction of the factory. Once this information has been processed, a survey drawing of the current state is made with AutoCAD and a three-dimensional model is built with the Rhinoceros application. Images of the ensemble are then obtained with Rhinoceros and V-Ray, ending with postproduction in Photoshop. The proposed methodology has made it possible to obtain a three-dimensional model of the flour factory "El Puente Colgante" in Aranjuez, with an accurate virtual reconstruction of its original state prior to demolition. The procedure can be generalized to any other example of industrial architecture.
Abstract:
The value chain in agriculture is a current issue affecting everyone from farmers to consumers. It raises important questions about profitability and even the continuity of certain sectors. Although the structure and concentration of the intermediate and final levels of the value chain, the distribution and retail sectors, have evolved over time, a similar evolution does not seem to have reached the initial level of the chain, the production sector. This produces large imbalances in power and leverage between levels of the value chain that could imply several problems for rural actors. Relatively little attention has been paid to possible market distortions caused by the high level of concentration on the distribution side of the agrifood system.
Abstract:
The analysis of the harmonic terms related to the rotational speed of a cup anemometer is a way to detect anomalies such as wear and tear, rotor non-symmetries (rotor damage), or problems in the output signal system. The research already done on this matter at the IDR/UPM Institute is now being taken to cup anemometers working in the field. A 1-2 year testing campaign is being carried out in collaboration with Kintech Engineering. Two Thies First Class Advanced anemometers installed at 58 m and 73 m height on a meteorological tower are constantly monitored. The results will be correlated with the evolution of anemometer performance, studied through several calibrations planned over the course of the testing campaign.
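A minimal sketch of how the harmonic terms of the rotational-speed signal could be extracted is shown below; the function name, signal layout, and peak-detection rule are assumptions for illustration, not the IDR/UPM procedure:

```python
import numpy as np

def rotation_harmonics(speed, sample_rate_hz, n_harmonics=3):
    """Sketch of extracting harmonics of a cup anemometer's rotational speed.

    speed          : 1-D array of instantaneous rotational-speed samples
    sample_rate_hz : sampling frequency of the signal
    Returns the fundamental (rotation) frequency and the amplitudes of its
    first n_harmonics. (Peak detection and scaling are assumptions.)
    """
    speed = np.asarray(speed, dtype=float)
    spectrum = np.abs(np.fft.rfft(speed - speed.mean()))
    freqs = np.fft.rfftfreq(speed.size, d=1.0 / sample_rate_hz)
    fundamental = freqs[np.argmax(spectrum[1:]) + 1]   # dominant non-DC peak
    amplitudes = []
    for n in range(1, n_harmonics + 1):
        idx = np.argmin(np.abs(freqs - n * fundamental))
        amplitudes.append(2.0 * spectrum[idx] / speed.size)
    return fundamental, amplitudes
```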