951 results for Worst-case execution-time
Abstract:
There is great demand for easily accessible, user-friendly dietary self-management applications, yet accurate, fully automatic estimation of nutritional intake using computer vision methods remains an open research problem. One key element of this problem is volume estimation, which can be computed from 3D models obtained using multi-view geometry. This paper presents a computational system for volume estimation based on the processing of two meal images. A 3D model of the served meal is reconstructed from the acquired images and the volume is computed from the reconstructed shape. The algorithm was tested on food models (dummy foods) of known volume and on real served food. Volume accuracy was on the order of 90%, while the total execution time was below 15 seconds per image pair. The proposed system combines simple, computationally affordable methods for 3D reconstruction; it remained stable throughout the experiments, operates in near real time, and places minimal constraints on users.
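The abstract does not detail the reconstruction pipeline, but its final step, computing a volume from a reconstructed 3D shape, can be pictured with a minimal sketch. The convex-hull approximation below is an illustrative assumption, not the paper's actual surface model.

```python
# Minimal sketch: estimate food volume from a reconstructed 3D point cloud.
# Assumes the points are already in real-world units (e.g., cm) and that a
# convex hull is an acceptable approximation of the served meal's shape.
import numpy as np
from scipy.spatial import ConvexHull

def estimate_volume(points_cm: np.ndarray) -> float:
    """Return the convex-hull volume (cm^3) of an (N, 3) point cloud."""
    hull = ConvexHull(points_cm)
    return hull.volume

# Usage with synthetic data standing in for a multi-view reconstruction:
rng = np.random.default_rng(0)
cloud = rng.uniform(low=0.0, high=5.0, size=(500, 3))  # hypothetical points
print(f"Estimated volume: {estimate_volume(cloud):.1f} cm^3")
```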
Abstract:
Every day we cope with potentially disadvantageous upcoming events, so it makes sense to be prepared for the worst case. Such a 'pessimistic' bias is reflected in brain activation during emotion processing. Healthy individuals underwent functional neuroimaging while viewing emotional stimuli that had earlier been cued either ambiguously or unambiguously with respect to their emotional valence. Presentation of ambiguously announced pleasant pictures, compared with unambiguously announced pleasant pictures, resulted in increased activity in the ventrolateral prefrontal, premotor, and temporal cortex, and in the caudate nucleus. This was not the case for the corresponding negative conditions. This indicates that pleasant stimuli after ambiguous cueing provided 'unexpected' emotional input, resulting in the adaptation of brain activity. It strengthens the hypothesis of a 'pessimistic' bias of brain activation toward ambiguous emotional events.
Abstract:
Since we do not know what the future holds, we prepare for expected emotional events in order to deal with a pleasant or threatening environment. From an evolutionary perspective, it makes sense to be particularly prepared for the worst-case scenario. We were interested in evaluating whether this assumption is reflected in the central nervous information processing associated with expecting visual stimuli of unknown emotional valence. While being scanned with functional magnetic resonance imaging, healthy subjects were cued to expect and then perceive visual stimuli of known emotional valence (pleasant, unpleasant, or neutral), as well as stimuli of unknown valence that could have been either pleasant or unpleasant. While subjects anticipated pictures of unknown valence, the activity of emotion-processing brain areas was similar to that associated with expecting unpleasant pictures; there were no areas in which the activity resembled that seen when expecting pleasant pictures. The activity of the identified regions, including the bilateral insula, right inferior frontal gyrus, medial thalamus, and red nucleus, further correlated with individual mood ratings: the worse the mood, the higher the activity. These areas are presumably involved in a network for internal adaptation and preparation processes that enables acting on potential or certain unpleasant events. Their activity appears to reflect a 'pessimistic' bias whereby events of unknown valence are anticipated to be unpleasant.
Abstract:
BACKGROUND: Contemporary pacemakers (PMs) are powered by primary batteries with a limited energy-storing capacity. PM replacements because of battery depletion are common and unpleasant and bear the risk of complications. Batteryless PMs that harvest energy inside the body may overcome these limitations. OBJECTIVE: The goal of this study was to develop a batteryless PM powered by a solar module that converts transcutaneous light into electrical energy. METHODS: Ex vivo measurements were performed with solar modules placed under pig skin flaps exposed to different irradiation scenarios (direct sunlight, shade outdoors, and indoors). Subsequently, 2 sunlight-powered PMs featuring a 4.6-cm2 solar module were implanted in vivo in a pig. One prototype, equipped with an energy buffer, was run in darkness for several weeks to simulate a worst-case scenario. RESULTS: Ex vivo, median output power of the solar module was 1963 μW/cm2 (interquartile range [IQR] 1940-2107 μW/cm2) under direct sunlight exposure outdoors, 206 μW/cm2 (IQR 194-233 μW/cm2) in shade outdoors, and 4 μW/cm2 (IQR 3.6-4.3 μW/cm2) indoors (current PMs use approximately 10-20 μW). Median skin flap thickness was 4.8 mm. In vivo, prolonged SOO pacing was performed even with short irradiation periods. Our PM was able to pace continuously at a rate of 125 bpm (3.7 V at 0.6 ms) for 1½ months in darkness. CONCLUSION: Tomorrow's PMs might be batteryless and powered by sunlight. Because of the good skin penetrance of infrared light, a significant amount of energy can be harvested by a subcutaneous solar module even indoors. The use of an energy buffer allows periods of darkness to be overcome.
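A back-of-the-envelope check of the reported darkness run: using the abstract's figure of roughly 10-20 μW for current PM power draw, the energy a buffer must store to cover 1½ months of darkness can be estimated as below. The 15 μW midpoint and the capacitor sizing are illustrative assumptions, not values from the study.

```python
# Rough energy-budget sketch for the darkness scenario described above.
# Assumes a constant 15 uW draw (midpoint of the quoted 10-20 uW range).
SECONDS_PER_DAY = 86_400
days_dark = 45          # ~1.5 months of darkness
power_w = 15e-6         # assumed mean pacing power draw (W)

energy_j = power_w * days_dark * SECONDS_PER_DAY
print(f"Buffer energy needed: {energy_j:.1f} J")   # ~58 J

# For scale: a supercapacitor storing E = 0.5 * C * V^2 at the quoted
# 3.7 V pacing voltage would need roughly
capacitance_f = 2 * energy_j / 3.7**2
print(f"Equivalent capacitance at 3.7 V: {capacitance_f:.1f} F")
```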
Abstract:
BACKGROUND Since the pioneering work of Jacobson and Suarez, microsurgery has steadily progressed and is now used in all surgical specialities, particularly in plastic surgery. Before performing clinical procedures, it is necessary to learn the basic techniques in the laboratory. OBJECTIVE To assess an animal model that circumvents the following issues: ethical rules, cost, anesthesia, and training time. METHODS Between July 2012 and September 2012, 182 earthworms were used in 150 microsurgery training sessions simulating discrepancy microanastomoses. Training was undertaken over 10 weekly periods. Each training session included 15 simulated microanastomoses performed using the Harashina technique (earthworm diameters >1.5 mm [n=5], between 1.0 mm and 1.5 mm [n=5], and <1.0 mm [n=5]). The technique is presented and documented. A linear model with the week number (as a numeric covariate) and the size of the animal (as a factor) was used to determine the trend in anastomosis time over subsequent weeks as well as differences between the size groups. RESULTS The linear model showed a significant trend (P<0.001) in anastomosis time over the course of the training, as well as significant differences (P<0.001) between the groups of animals of different sizes. For diameters >1.5 mm, mean anastomosis time decreased from 19.6±1.9 min to 12.6±0.7 min between the first and last weeks of training. For training involving smaller diameters, the results showed a reduction in execution time of 36.1% (P<0.01) (diameter between 1.0 mm and 1.5 mm) and 40.6% (P<0.01) (diameter <1.0 mm) between the first and last weeks. The study demonstrates an improvement in dexterity and in the speed of knot execution. CONCLUSION The earthworm appears to be a reliable experimental model for microsurgical training in discrepancy microanastomoses. Its numerous advantages, as discussed in the present report, suggest that this training model will grow and develop significantly in the near future.
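The analysis described, a linear model with week as a numeric covariate and animal size as a factor, corresponds to an ordinary least-squares fit such as the sketch below. The data frame contents and column names are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of the reported analysis: anastomosis time modeled on
# training week (numeric) and earthworm size group (categorical factor).
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical records: (week 1-10, size group, anastomosis time in minutes)
df = pd.DataFrame({
    "week":     [1, 1, 5, 5, 10, 10],
    "size":     [">1.5", "<1.0", ">1.5", "<1.0", ">1.5", "<1.0"],
    "time_min": [19.6, 25.0, 16.0, 20.0, 12.6, 15.0],
})

model = smf.ols("time_min ~ week + C(size)", data=df).fit()
print(model.summary())  # the 'week' coefficient gives the learning trend
```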
Abstract:
We introduce a multistable subordinator, which generalizes the stable subordinator to the case of time-varying stability index. This enables us to define a multifractional Poisson process. We study properties of these processes and establish the convergence of a continuous-time random walk to the multifractional Poisson process.
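As a hedged sketch of one natural construction consistent with the abstract (the paper's precise definitions may differ), a subordinator with a time-varying stability index can be specified through its Laplace transform, and a multifractional Poisson process obtained by a time change:

```latex
% One way to make "stable subordinator with time-varying index" precise:
% an independent-increment process $W$ with Laplace transform
\[
  \mathbb{E}\!\left[e^{-\lambda\,(W(t)-W(s))}\right]
  = \exp\!\Big(-\!\int_s^t \lambda^{\alpha(u)}\,du\Big),
  \qquad \lambda \ge 0,\quad 0 < \alpha(u) < 1,
\]
% which reduces to the stable subordinator when $\alpha(\cdot)$ is constant.
% A multifractional Poisson process may then be built by time-changing a
% Poisson process $N$ with the inverse (first-passage) process of $W$:
\[
  E(t) = \inf\{u \ge 0 : W(u) > t\},
  \qquad
  N_{\alpha(\cdot)}(t) = N\big(E(t)\big).
\]
```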
Abstract:
PURPOSE The implementation of genomic-based medicine is hindered by unresolved questions regarding data privacy and the delivery of interpreted results to health-care practitioners. We used DNA-based prediction of HIV-related outcomes as a model to explore critical issues in clinical genomics. METHODS We genotyped 4,149 markers in HIV-positive individuals. Variants allowed for prediction of 17 traits relevant to HIV medical care, inference of patient ancestry, and imputation of human leukocyte antigen (HLA) types. Genetic data were processed under a privacy-preserving framework using homomorphic encryption, and clinical reports describing potentially actionable results were delivered to health-care providers. RESULTS A total of 230 patients were included in the study. We demonstrated the feasibility of encrypting a large number of genetic markers, inferring patient ancestry, computing monogenic and polygenic trait risks, and reporting results under privacy-preserving conditions. The average execution time of a multimarker test on encrypted data was 865 ms on a standard computer. The proportion of tests returning potentially actionable genetic results ranged from 0 to 54%. CONCLUSIONS The implementation model presented herein informs strategies for delivering genomic test results for clinical care. Data encryption to ensure privacy helps to build patient trust, a key requirement on the road to genomic-based medicine. Genet Med advance online publication, 14 January 2016. Genetics in Medicine (2016); doi:10.1038/gim.2015.167.
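The abstract reports multimarker tests evaluated on homomorphically encrypted genotypes. As an illustration only (the study's actual cryptosystem and scoring method are not specified here), the sketch below computes a weighted risk score over encrypted genotype dosages using the additively homomorphic Paillier scheme from the python-paillier (`phe`) package.

```python
# Sketch: additively homomorphic evaluation of a weighted genetic risk score.
# Genotype dosages (0/1/2) are encrypted client-side; the server multiplies
# each ciphertext by a public weight and sums, never seeing raw genotypes.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

genotypes = [0, 1, 2, 1]             # hypothetical marker dosages
weights   = [0.30, -0.12, 0.55, 0.08]  # hypothetical effect sizes

encrypted = [public_key.encrypt(g) for g in genotypes]

# Server side: ciphertext * plaintext scalar and ciphertext + ciphertext
# are the only operations needed, and both are supported by Paillier.
score_ct = encrypted[0] * weights[0]
for ct, w in zip(encrypted[1:], weights[1:]):
    score_ct = score_ct + ct * w

print(f"Decrypted risk score: {private_key.decrypt(score_ct):.3f}")
```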
Abstract:
Radiation therapy for patients with intact cervical cancer is frequently delivered using primary external beam radiation therapy (EBRT) followed by two fractions of intracavitary brachytherapy (ICBT). Although the tumor is the primary radiation target, controlling microscopic disease in the lymph nodes is just as critical to patient treatment outcome. In patients in whom gross lymphadenopathy is discovered, an extra EBRT boost course is delivered between the two ICBT fractions. Since the nodal boost is an addendum to primary EBRT and ICBT, its prescription and delivery must take into account previously delivered dose. This project addresses the major issues of this complex process with the aim of improving treatment accuracy while increasing dose sparing of the surrounding normal tissues. Because external beam boosts to involved lymph nodes are given prior to the completion of ICBT, assumptions must be made about the dose that positive lymph nodes will receive from future implants. The first aim of this project was to quantify differences in nodal dose contribution between independent ICBT fractions. We retrospectively evaluated differences in the ICBT dose contribution to positive pelvic nodes for ten patients who had previously received an external beam nodal boost. Our results indicate that the mean dose to the pelvic nodes differed by up to 1.9 Gy between independent ICBT fractions. The second aim was to develop and validate a volumetric method for summing dose to the normal tissues during prescription of the nodal boost. The traditional method of dose summation uses the maximum point dose from each modality, which represents only the worst-case scenario; moreover, the worst case is often an exaggeration when highly conformal therapy methods such as intensity-modulated radiation therapy (IMRT) are used. We used deformable image registration algorithms to volumetrically sum dose for the bladder and rectum and created a voxel-by-voxel validation method. The mean registration error over all voxels within the bladder and rectum was 5 and 6 mm, respectively. Finally, the third aim explored the potential of proton therapy to reduce normal tissue dose. A major physical advantage of protons over photons is that protons stop after delivering dose in the tumor. Although theoretically superior to photons, proton beams are more sensitive to uncertainties caused by interfractional anatomical variations, which must be accounted for during treatment planning to ensure complete target coverage. We have demonstrated a systematic approach to determining population-based anatomical margin requirements for proton therapy. The observed optimal treatment angles for common iliac nodes were 90° (left lateral) and 180° (posterior-anterior [PA]) with additional 0.8 cm and 0.9 cm margins, respectively. For external iliac nodes, lateral and PA beams required additional 0.4 cm and 0.9 cm margins, respectively. Through this project, we have provided radiation oncologists with additional information about potential differences in nodal dose between independent ICBT insertions and about the volumetric total dose distribution in the bladder and rectum. We have also determined the margins needed for safe delivery of proton therapy when delivering nodal boosts to patients with cervical cancer.
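The second aim contrasts max-point-dose summation with voxel-wise summation after deformable registration. A minimal numpy sketch of the two conventions follows; the dose grids are hypothetical, and the registration step that maps the ICBT dose onto the EBRT grid is assumed to have already run.

```python
# Sketch: summing EBRT and ICBT dose to an organ, two conventions.
import numpy as np

rng = np.random.default_rng(1)
ebrt_dose = rng.uniform(0, 45, size=(32, 32, 32))  # hypothetical dose grid (Gy)
icbt_dose = rng.uniform(0, 20, size=(32, 32, 32))  # hypothetical grid, assumed
# already deformably mapped onto the EBRT grid by a registration step.

# Traditional summation: add the maximum point dose from each modality.
# This "worst case" can exaggerate dose when the two maxima fall at
# different anatomical locations.
max_point_sum = ebrt_dose.max() + icbt_dose.max()

# Volumetric summation: add dose voxel by voxel on the common grid,
# then read off the true combined maximum.
voxel_sum = ebrt_dose + icbt_dose
true_max = voxel_sum.max()

print(f"max-point sum: {max_point_sum:.1f} Gy, voxelwise max: {true_max:.1f} Gy")
```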
Abstract:
This study evaluates the effect of hand-thinning intensity and timing on yield and fruit size in apple (Malus domestica Borkh. cv. Gala). Trees with similar crop loads were hand-thinned to 4.6 and 2.56 fruits/cm2 of trunk cross-sectional area (TCSA) at 32 days after full bloom (DAFB), and to 3.87 and 2.39 fruits/cm2 TCSA at 39 DAFB. The highest yield per tree was obtained, with the largest proportion of small fruit sizes, in the treatment with the highest crop load. Thinning date affected neither yield nor mean fruit weight. The smallest fruit size was obtained with the unthinned control. Mean fruit weight increased significantly when the crop load was reduced to 2.36 fruits/cm2 TCSA, with no difference between thinning dates.
Abstract:
Runtime management of distributed information systems is a complex and costly activity. One of the main challenges that must be addressed is obtaining a complete and up-to-date view of all the managed runtime resources. This article presents a monitoring architecture for heterogeneous and distributed information systems. It is composed of two elements: an information model and an agent infrastructure. The model masks the complexity and variability of these systems and enables abstraction from non-relevant details. The infrastructure uses this information model to monitor and manage the modeled environment, performing and detecting changes at execution time. The agent infrastructure is further detailed, and its components and the relationships between them are explained. Moreover, the proposal is validated through a set of agents that instrument the JEE Glassfish application server, paying special attention to support for distributed configuration scenarios.
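As a hedged sketch of the kind of agent such an architecture implies (the class names and the resource model below are hypothetical, not taken from the paper), an agent can poll managed resources described by an information model and report state changes detected at run time:

```python
# Minimal sketch of a monitoring agent: poll resources described by an
# information model and report any state changes detected at run time.
import time
from dataclasses import dataclass, field

@dataclass
class ManagedResource:
    name: str
    probe: callable                 # returns the resource's current state
    last_state: object = field(default=None)

class MonitoringAgent:
    def __init__(self, resources):
        self.resources = resources

    def poll_once(self):
        for res in self.resources:
            state = res.probe()
            if state != res.last_state:
                print(f"[change] {res.name}: {res.last_state!r} -> {state!r}")
                res.last_state = state

# Hypothetical probe standing in for e.g. a GlassFish JMX attribute read.
counter = iter(range(3))
agent = MonitoringAgent([ManagedResource("heap_used_mb", lambda: next(counter, 2))])
for _ in range(3):
    agent.poll_once()
    time.sleep(0.1)
```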
Abstract:
The purpose of this study is to determine the critical wear levels of the contact wire of the catenary on metropolitan lines. The study focusses on the zones of the contact wire where localised wear is produced, normally associated with the appearance of electric arcs. To this end, a finite element model has been developed to study the dynamics of pantograph-catenary interaction. The model includes a zone of localised wear and a singularity in the contact wire in order to simulate the worst-case scenario from the point of view of stresses. In order to consider the different stages of the wire wear process, different depths and widths of the localised wear zone were defined. The results of the dynamic simulations performed for each stage of wear allow the minimum resistant cross-sectional area of the contact wire to be determined, below which stresses exceed the allowable stress. The maximum tensile stress reached in the contact wire shows a clear sensitivity to the size of the local wear zone, defined by its width and depth. In this way, if the wear measurements taken with an overhead line recording vehicle are analysed, it will be possible to calculate the potential breakage risk of the wire. A strong dependence on the tensile force of the contact wire has also been observed. These results will allow priorities to be set for replacing the most critical sections of wire, thereby making maintenance much more efficient. The results obtained show that the wire replacement criteria currently applied are appropriate, although in some wear scenarios these criteria could be adjusted even further, thereby prolonging the life cycle of the contact wire.
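In its simplest form, the breakage-risk check the abstract describes reduces to comparing the axial stress in the worn section with the allowable stress. A toy sketch follows; all numbers are illustrative assumptions, not values from the study.

```python
# Toy check: flag contact-wire sections whose remaining cross-section
# pushes axial stress above the allowable stress.
def breakage_risk(tensile_force_n: float,
                  remaining_area_mm2: float,
                  allowable_stress_mpa: float) -> bool:
    stress_mpa = tensile_force_n / remaining_area_mm2   # N/mm^2 == MPa
    return stress_mpa > allowable_stress_mpa

# Illustrative numbers: 15 kN tension, worn sections of 60 and 45 mm^2,
# 300 MPa assumed allowable stress for a copper contact wire.
print(breakage_risk(15_000, 60.0, 300.0))   # False: 250 MPa < 300 MPa
print(breakage_risk(15_000, 45.0, 300.0))   # True: ~333 MPa > 300 MPa
```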
Abstract:
We study the problem of efficient, scalable set-sharing analysis of logic programs. We use the idea of representing sharing information as a pair of abstract substitutions, one of which is a worst-case sharing representation called a clique set, which was previously proposed for the case of inferring pair-sharing. We use the clique-set representation for (1) inferring actual set-sharing information, and (2) analysis within a top-down framework. In particular, we define the new abstract functions required by standard top-down analyses, both for sharing alone and also for the case of including freeness in addition to sharing. We use cliques both as an alternative representation and as a widening, defining several widening operators. Our experimental evaluation supports the conclusion that, for inferring set-sharing, as was the case for inferring pair-sharing, precision losses are limited, while useful efficiency gains are obtained. We also derive useful conclusions regarding the interactions between thresholds, precision, efficiency, and the cost of widening. At the limit, the clique-set representation allowed analyzing some programs that exceeded memory capacity when using classical sharing representations.
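A hedged sketch of the core idea, not the paper's actual definitions: set-sharing can be kept as explicit sharing groups plus a clique set, and a widening can collapse an oversized collection of sharing groups into a single clique that compactly over-approximates them.

```python
# Sketch of the clique-set idea for set-sharing analysis: keep explicit
# sharing groups (sets of variables that may share) plus cliques, where a
# clique {x, y, z} stands for *all* nonempty subsets of {x, y, z} at once.
from itertools import combinations

def powerset_nonempty(s):
    s = sorted(s)
    return [frozenset(c) for r in range(1, len(s) + 1)
            for c in combinations(s, r)]

def widen(sharing_groups, threshold):
    """If there are too many explicit groups, collapse them into one clique.

    Deliberately crude, for illustration only; the paper defines several
    more precise widening operators.
    """
    if len(sharing_groups) <= threshold:
        return sharing_groups, set()
    clique = frozenset().union(*sharing_groups)
    return set(), {clique}

groups = {frozenset("xy"), frozenset("yz"), frozenset("xz"), frozenset("x")}
explicit, cliques = widen(groups, threshold=3)
print(cliques)                                       # {frozenset({'x','y','z'})}
print(len(powerset_nonempty(next(iter(cliques)))))   # 7 groups represented
```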
Abstract:
This paper presents and proves some fundamental results for independent and-parallelism (IAP). First, the paper treats the issues of correctness and efficiency: after defining strict and non-strict goal independence, it is proved that if strictly independent goals are executed in parallel, the solutions obtained are the same as those produced by standard sequential execution. It is also shown that, in the absence of failure, the parallel proof procedure does not generate any additional work (with respect to standard SLD resolution), while the actual execution time is reduced. The same results hold even if non-strictly independent goals are executed in parallel, provided a trivial rewriting of such goals is performed. In addition, and most importantly, the paper treats the issue of compile-time generation of IAP by proposing conditions, to be written at compile time, that efficiently check strict and non-strict goal independence at run time, and by proving the sufficiency of such conditions. It is also shown how simpler conditions can be constructed if some information regarding the binding context of the goals to be executed in parallel is available to the compiler through either local or program-level analysis. These results therefore provide a formal basis for the automatic compile-time generation of IAP. As a corollary of these results, the paper also proves that negative goals are always non-strictly independent, and that goals which share a first occurrence of an existential variable are never independent.
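The run-time side of the result can be pictured with a toy check: two goals are strictly independent when the variables occurring in them are disjoint. The nested-tuple term representation below is a hypothetical stand-in for a real Prolog engine's, used only to make the check concrete.

```python
# Toy run-time check for strict goal independence: goals sharing no
# variables can be run in parallel with unchanged solutions.
def vars_of(term):
    """Collect variable names from a nested-tuple term representation,
    where strings starting with an uppercase letter denote variables."""
    if isinstance(term, str):
        return {term} if term[:1].isupper() else set()
    return set().union(*(vars_of(arg) for arg in term)) if term else set()

def strictly_independent(goal_a, goal_b):
    return vars_of(goal_a).isdisjoint(vars_of(goal_b))

g1 = ("append", "Xs", "Ys", "Zs")
g2 = ("length", "Ws", "N")
g3 = ("member", "X", "Zs")
print(strictly_independent(g1, g2))  # True: no shared variables
print(strictly_independent(g1, g3))  # False: both mention Zs
```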
Abstract:
In an increasing number of applications (e.g., in embedded, real-time, or mobile systems) it is important or even essential to ensure conformance with respect to a specification expressing resource usages, such as execution time, memory, energy, or user-defined resources. In previous work we presented a novel framework for data size-aware, static resource usage verification. Specifications can include both lower- and upper-bound resource usage functions. In order to statically check such specifications, upper- and lower-bound resource usage functions (on input data sizes) approximating the actual resource usage of the program are automatically inferred and compared against the specification. The outcome of the static checking of assertions can express intervals of input data sizes such that a given specification can be proved for some intervals but disproved for others. After an overview of the approach, in this paper we provide a number of novel contributions: we present a full formalization, and we report on and provide results from an implementation within the Ciao/CiaoPP framework (which provides a general, unified platform for static and run-time verification, as well as unit testing). We also generalize the checking of assertions to allow preconditions expressing intervals within which the input data size of a program is supposed to lie (i.e., intervals for which each assertion is applicable), and we extend the class of resource usage functions that can be checked.
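A hedged sketch of the interval-producing check the abstract describes (the bound functions are invented for illustration): compare inferred upper and lower bounds against a specified budget over a range of input data sizes, and report the sizes for which the assertion is proved or disproved.

```python
# Sketch: data-size intervals on which a resource-usage assertion holds.
# Wherever the inferred upper bound is below the spec's budget, the
# assertion is proved; wherever the inferred lower bound exceeds the
# budget, it is disproved; remaining sizes are undetermined.
def intervals_where(pred, sizes):
    """Group consecutive data sizes satisfying pred into (lo, hi) intervals."""
    out, start = [], None
    for n in sizes:
        if pred(n) and start is None:
            start = n
        elif not pred(n) and start is not None:
            out.append((start, n - 1))
            start = None
    if start is not None:
        out.append((start, sizes[-1]))
    return out

spec_upper = lambda n: 100 * n        # hypothetical spec: linear budget
inferred_upper = lambda n: 2 * n * n  # hypothetical inferred upper bound
inferred_lower = lambda n: n * n      # hypothetical inferred lower bound

sizes = list(range(1, 201))
print("proved:", intervals_where(lambda n: inferred_upper(n) <= spec_upper(n), sizes))
print("disproved:", intervals_where(lambda n: inferred_lower(n) > spec_upper(n), sizes))
# proved: [(1, 50)]   disproved: [(101, 200)]
```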