951 results for Worst-case execution-time
Abstract:
This article proposes the adoption of a predictive numerical model to estimate the "execution time" variable for public construction projects in an objective manner. The field work consisted of applying statistical methods to analyze data from works tendered and executed between 2006 and 2009 at the Universidade Federal do Pará (UFPA). The data analysis involved the calculation of linear regressions and transformations of the functions. After stratification and initial treatment of the data, the elements adopted for the construction of the final model were restricted to 102 works out of a total of 225 originally surveyed, yielding the following statistical parameters: correlation coefficient (R) of 0.899; coefficient of determination (R²) of 0.808; adjusted coefficient of determination (adjusted R²) of 0.796; and standard error (Se) of 0.41. These parameters indicate a strong linear correlation between the variables, showing that 79.60% of the variation in the time needed to execute a public work can be attributed to the joint variation of the variables built area, budgeted cost, technical-operational capacity of the contracting party, operational capacity of the contractor, type of service, and season of the year.
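As a quick consistency check on the reported statistics (a worked equation added here for the reader, not part of the original abstract): with n = 102 works and p = 6 explanatory variables, the adjusted coefficient of determination follows from R² as

```latex
\bar{R}^2 = 1 - (1 - R^2)\,\frac{n - 1}{n - p - 1}
          = 1 - (1 - 0.808)\,\frac{101}{95} \approx 0.796
```

which matches the reported adjusted R² of 0.796.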
Abstract:
In Brazil, many cities suffer from flooding, which often destroys much of a city's infrastructure and isolates many families. Having emergency measures available for these and similar situations is therefore of utmost importance. This paper proposes the design of a timber bridge that can be used in emergency situations such as floods, especially on secondary roads. The structural type considered has, among other characteristics, elements that are easy to transport and assemble. At this early stage of the project, the work covers only the verification and sizing of the structural elements of the bridge superstructure. For this purpose, it relies on computer programs, chiefly PCFrame and Visual Taco: the first is used to model the structure and determine the design internal forces in the elements, and the second assists in the sizing and the verifications in accordance with the Brazilian technical standards for timber bridges. The wood adopted in the project is Eucalyptus saligna, which is easy to obtain and work with and is sourced from the Vale do Paraíba region. For this application, the bridge should have the following characteristics: short execution time, structural simplicity, and relatively low assembly cost.
Abstract:
Ubiquitous Computing promises seamless access to a wide range of applications and Internet-based services from anywhere, at any time, and using any device. In this scenario, new challenges for the practice of software development arise: applications and services must keep a coherent behavior and a proper appearance, and must adapt to a wide range of contextual usage requirements and hardware characteristics. In particular, due to its interactive nature, the interface content of Web applications must adapt to a large diversity of devices and contexts. In order to overcome such obstacles, this work introduces a methodology for content adaptation of Web 2.0 interfaces. The basis of our work is to combine static adaptation (the implementation of static Web interfaces) with dynamic adaptation (the alteration, at execution time, of static interfaces so that they adapt to different contexts of use). As a hybrid, our methodology benefits from the advantages of both adaptation strategies, static and dynamic. Along these lines, we designed and implemented UbiCon, a framework on which we tested our concepts through a case study and a development experiment. Our results show that the hybrid methodology over UbiCon leads to broader and more accessible interfaces, and to faster and less costly software development. We believe that the UbiCon hybrid methodology can foster more efficient and accurate interface engineering in industry and academia.
Abstract:
Proper hazard identification has become progressively more difficult to achieve, as witnessed by several major accidents that took place in Europe, such as the ammonium nitrate explosion at Toulouse (2001) and the vapour cloud explosion at Buncefield (2005), whose accident scenarios were not considered by their site safety cases. Furthermore, the rapid renewal of industrial technology has brought about the need to upgrade hazard identification methodologies. Accident scenarios of emerging technologies, which are still not properly characterized, may remain unidentified until they take place for the first time. The consideration of atypical scenarios, which deviate from normal expectations of unwanted events or from worst-case reference scenarios, is thus extremely challenging. A specific method named Dynamic Procedure for Atypical Scenarios Identification (DyPASI) was developed as a complementary tool to bow-tie identification techniques. The main aim of the methodology is to provide an easier but comprehensive hazard identification of the industrial process analysed, by systematizing information from early signals of risk related to past events, near misses and inherent studies. DyPASI was validated on two examples of new and emerging technologies: Liquefied Natural Gas regasification and Carbon Capture and Storage. The study broadened knowledge of the related emerging risks and, at the same time, demonstrated that DyPASI is a valuable tool to obtain a complete and updated overview of potential hazards. Moreover, in order to tackle the underlying causes of atypical accidents, three methods for the development of early warning indicators were assessed: the Resilience-based Early Warning Indicator (REWI) method, the Dual Assurance method and the Emerging Risk Key Performance Indicator method. REWI was found to be the most complementary and effective of the three, demonstrating that its synergy with DyPASI would be an adequate strategy to improve hazard identification methodologies towards the capture of atypical accident scenarios.
Abstract:
Constructing ontology networks typically occurs at design time at the hands of knowledge engineers who assemble their components statically. There are, however, use cases where ontology networks need to be assembled upon request and processed at runtime, without altering the stored ontologies and without tampering with one another. These are what we call "virtual [ontology] networks", and keeping track of how an ontology changes in each virtual network is called "multiplexing". Issues may arise from the connectivity of ontology networks. In many cases, simple flat import schemes will not work, because many ontology managers can cause property assertions to be erroneously interpreted as annotations and ignored by reasoners. Also, multiple virtual networks should optimize their cumulative memory footprint, and where they cannot, this should occur only for very limited periods of time. We claim that these problems should be handled by the software that serves these ontology networks, rather than by ontology engineering methodologies. We propose a method that spreads multiple virtual networks across a 3-tier structure and can reduce the number of erroneously interpreted axioms under certain raw statement distributions across the ontologies. We assumed OWL as the core language handled by semantic applications in the framework at hand, due to the greater availability of reasoners and rule engines. We also verified that, in common OWL ontology management software, OWL axiom interpretation occurs in the worst-case scenario of a pre-order visit. To measure the effectiveness and space-efficiency of our solution, a Java and RESTful implementation was produced within an Apache project. We verified that a 3-tier structure can accommodate reasonably complex ontology networks better, in terms of the expressivity of OWL axiom interpretation, than flat-tree import schemes can. We measured both the memory overhead of the additional components we put on top of traditional ontology networks and the framework's caching capabilities.
Abstract:
Aseptic loosening of metal implants is mainly attributed to the formation of metal degradation products. These include particulate debris and corrosion products, such as metal ions (anodic half-reaction) and reactive oxygen species (ROS; cathodic half-reaction). While numerous clinical studies describe various adverse effects of metal degradation products, detailed knowledge of metal-induced cellular reactions, which might be important for possible therapeutic intervention, is still incomplete. Since endothelial cells are involved in inflammation and angiogenesis, two processes that are critical for wound healing and the integration of metal implants, the effects of different metal alloys and their degradation products on these cells were investigated. Endothelial cells on Ti6Al4V alloy showed signs of oxidative stress, similar to the response of endothelial cells to the cathodic partial reaction of corrosion induced directly on Ti6Al4V surfaces. Furthermore, oxidative stress on Ti6Al4V alloy reduced the pro-inflammatory stimulation of endothelial cells by TNF-α and LPS. Oxidative stress and other stress-related responses were also observed in endothelial cells in contact with Co28Cr6Mo alloy. Importantly, these effects could be reduced by coating Co28Cr6Mo with a TiO2 layer, thus favouring the use of such surface modification in the development of medical devices for orthopaedic surgery. The reaction of endothelial cells to Co28Cr6Mo alloy was partially similar to the effects exerted by Co2+, which is known to be released from metal implants. Co2+ also induced ROS formation and DNA damage in endothelial cells. This correlated with p53 and p21 up-regulation, indicating the possibility of cell cycle arrest. Since CoCl2 is used as a hypoxia-mimicking agent, the HIF-1α dependence of cellular responses to Co2+ was studied in comparison to anoxia-induced effects. Although important HIF-1α-dependent genes were identified, a more detailed analysis of microarray data will be required to provide additional information about the mechanisms of Co2+ action. All these reactions of endothelial cells to metal degradation products might play a role in the complex processes taking place in the body following metal device implantation. In the worst case, this can lead to aseptic loosening of the implant and the need for revision surgery. Knowledge of the molecular mechanisms of metal-induced responses will hopefully provide the possibility to interfere with undesirable processes at the implant/tissue interface, thus extending the lifetime of the implant and improving the overall success of metal implant applications.
Abstract:
Computing the weighted geometric mean of large sparse matrices is an operation that rapidly becomes intractable as the size of the matrices involved grows. However, if we are not interested in the computation of the matrix function itself, but just in that of its product with a vector, the problem becomes simpler, and there is a chance to solve it even when the matrix mean itself would be impossible to compute. Our interest is motivated by the fact that this calculation has practical applications related to the preconditioning of some operators arising in the domain decomposition of elliptic problems. In this thesis, we explore how such a computation can be performed efficiently. First, we exploit the properties of the weighted geometric mean and find several equivalent ways to express it through real powers of a matrix. Hence, we focus our attention on matrix powers and examine how well-known techniques can be adapted to the solution of the problem at hand. In particular, we consider two broad families of approaches for the computation of f(A) v, namely quadrature formulae and Krylov subspace methods, and generalize them to the pencil case f(A\B) v. Finally, we provide an extensive experimental evaluation of the proposed algorithms and also try to assess how convergence speed and execution time are influenced by some characteristics of the input matrices. Our results suggest that a few such characteristics have a bearing on performance and that, although there is no best choice in general, knowing the conditioning and the sparsity of the arguments beforehand can considerably help in choosing the best strategy to tackle the problem.
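The abstract above refers to Krylov subspace methods for computing f(A) v. As a minimal illustration of that idea (not the thesis' implementation; the function names, breakdown tolerance, and test matrix below are assumptions), the following sketch approximates f(A) v for a symmetric positive definite A by running a few Lanczos steps and applying f only to the small tridiagonal projection, the setting in which a real matrix power arises from the weighted geometric mean:

```python
import numpy as np

def lanczos_f_times_vector(A, v, f, m=30):
    """Approximate f(A) @ v for a symmetric matrix A using m Lanczos steps:
    f is applied only to the small m-by-m tridiagonal projection T."""
    n = v.shape[0]
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m)
    v_norm = np.linalg.norm(v)
    V[:, 0] = v / v_norm
    k = m
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w = w - alpha[j] * V[:, j]
        if j > 0:
            w = w - beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-12:        # happy breakdown: Krylov subspace exhausted
                k = j + 1
                break
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha[:k]) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)
    evals, evecs = np.linalg.eigh(T)   # T is small, so a dense eigensolve is cheap
    fT_e1 = evecs @ (f(evals) * evecs[0, :])   # first column of f(T)
    return v_norm * (V[:, :k] @ fT_e1)

# Example: approximate A^0.4 @ v, the kind of real matrix power that appears
# when the weighted geometric mean acts on a vector (toy SPD test matrix).
rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200))
A = M @ M.T + 200 * np.eye(200)
v = rng.standard_normal(200)
approx = lanczos_f_times_vector(A, v, lambda x: x ** 0.4, m=40)
```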
Abstract:
This dissertation discusses structural-electrostatic modeling techniques, genetic algorithm based optimization and control design for electrostatic micro devices. First, an alternative modeling technique, the interpolated force model, for electrostatic micro devices is discussed. The method provides improved computational efficiency relative to a benchmark model, as well as improved accuracy for irregular electrode configurations relative to a common approximate model, the parallel plate approximation model. For the configuration most similar to two parallel plates, expected to be the best case scenario for the approximate model, both the parallel plate approximation model and the interpolated force model maintained less than 2.2% error in static deflection compared to the benchmark model. For the configuration expected to be the worst case scenario for the parallel plate approximation model, the interpolated force model maintained less than 2.9% error in static deflection while the parallel plate approximation model is incapable of handling the configuration. Second, genetic algorithm based optimization is shown to improve the design of an electrostatic micro sensor. The design space is enlarged from published design spaces to include the configuration of both sensing and actuation electrodes, material distribution, actuation voltage and other geometric dimensions. For a small population, the design was improved by approximately a factor of 6 over 15 generations to a fitness value of 3.2 fF. For a larger population seeded with the best configurations of the previous optimization, the design was improved by another 7% in 5 generations to a fitness value of 3.0 fF. Third, a learning control algorithm is presented that reduces the closing time of a radiofrequency microelectromechanical systems switch by minimizing bounce while maintaining robustness to fabrication variability. Electrostatic actuation of the plate causes pull-in with high impact velocities, which are difficult to control due to parameter variations from part to part. A single degree-of-freedom model was utilized to design a learning control algorithm that shapes the actuation voltage based on the open/closed state of the switch. Experiments on 3 test switches show that after 5-10 iterations, the learning algorithm lands the switch with an impact velocity not exceeding 0.2 m/s, eliminating bounce.
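For context on the comparison above (a standard textbook relation, not the dissertation's interpolated force model): the parallel plate approximation treats each electrode pair as an ideal parallel-plate capacitor, so the attractive electrostatic force is

```latex
F_{\mathrm{pp}} = \frac{\varepsilon_0 \varepsilon_r A V^2}{2 g^2}
```

where A is the overlapping electrode area, V the applied voltage, g the instantaneous gap, and ε₀ε_r the permittivity of the gap medium; the rigid 1/g² dependence is what breaks down for the irregular electrode configurations that the interpolated force model targets.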
Abstract:
This study tests whether cognitive failures mediate effects of work-related time pressure and time control on commuting accidents and near-accidents. Participants were 83 employees (56% female) who each commuted between their regular place of residence and place of work using vehicles. The Workplace Cognitive Failure Scale (WCFS) asked for the frequency of failure in memory function, failure in attention regulation, and failure in action execution. Time pressure and time control at work were assessed by the Instrument for Stress Oriented Task Analysis (ISTA). Commuting accidents in the last 12 months were reported by 10% of participants, and half of the sample reported commuting near-accidents in the last 4 weeks. Cognitive failure significantly mediated the influence of time pressure at work on near-accidents even when age, gender, neuroticism, conscientiousness, commuting duration, commuting distance, and time pressure during commuting were controlled for. Time control was negatively related to cognitive failure and neuroticism, but no association with commuting accidents or near-accidents was found. Time pressure at work is likely to increase cognitive load. Time pressure might, therefore, increase cognitive failures during work and also during commuting. Hence, time pressure at work can decrease commuting safety. The result suggests a reduction of time pressure at work should improve commuting safety.
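A minimal sketch of the mediation logic described above, using synthetic data and hypothetical column names (the study's actual models also controlled for age, gender, personality, and commuting covariates):

```python
# Product-of-coefficients mediation: time pressure -> cognitive failure -> near-accidents.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 83
time_pressure = rng.normal(0, 1, n)
wcfs = 0.5 * time_pressure + rng.normal(0, 1, n)        # path a: pressure -> cognitive failure
p_near = 1 / (1 + np.exp(-(0.8 * wcfs - 0.5)))          # path b: cognitive failure -> near-accident
near_accident = rng.binomial(1, p_near)
df = pd.DataFrame({"time_pressure": time_pressure,
                   "wcfs": wcfs,
                   "near_accident": near_accident})

# Path a: does time pressure at work predict cognitive failure (WCFS)?
path_a = smf.ols("wcfs ~ time_pressure", data=df).fit()

# Path b (and direct effect c'): does cognitive failure predict near-accidents
# once time pressure is controlled for? Binary outcome -> logistic regression.
path_b = smf.logit("near_accident ~ wcfs + time_pressure", data=df).fit()

# Indirect (mediated) effect as the product of coefficients a * b; in practice
# its significance would be assessed with a bootstrap or a Sobel-type test.
indirect = path_a.params["time_pressure"] * path_b.params["wcfs"]
print("indirect effect (a*b):", round(indirect, 3))
```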
Abstract:
Proton therapy is growing increasingly popular due to its superior dose characteristics compared to conventional photon therapy. Protons travel a finite range in the patient body and stop, thereby delivering no dose beyond their range. However, because the range of a proton beam is heavily dependent on the tissue density along its beam path, uncertainties in patient setup position and inherent range calculation can degrade the dose distribution significantly. Despite these challenges, which are unique to proton therapy, current management of the uncertainties during treatment planning of proton therapy has been similar to that of conventional photon therapy. The goal of this dissertation research was to develop a treatment planning method and a plan evaluation method that address proton-specific issues regarding setup and range uncertainties. Treatment plan design method adapted to proton therapy: Currently, for proton therapy using a scanning beam delivery system, setup uncertainties are largely accounted for by geometrically expanding a clinical target volume (CTV) to a planning target volume (PTV). However, a PTV alone cannot adequately account for range uncertainties coupled to misaligned patient anatomy in the beam path, since it does not account for the change in tissue density. In order to remedy this problem, we proposed a beam-specific PTV (bsPTV) that accounts for the change in tissue density along the beam path due to the uncertainties. Our proposed method was successfully implemented, and its superiority over the conventional PTV was shown through a controlled experiment. Furthermore, we have shown that the bsPTV concept can be incorporated into beam angle optimization for better target coverage and normal tissue sparing for a selected lung cancer patient. Treatment plan evaluation method adapted to proton therapy: The dose-volume histogram of the CTV or any other volume of interest at the time of planning does not represent the most probable dosimetric outcome of a given plan, as it does not include the uncertainties mentioned earlier. Currently, the PTV is used as a surrogate of the CTV's worst-case scenario for target dose estimation. However, because proton dose distributions are subject to change under these uncertainties, the validity of the PTV analysis method is questionable. In order to remedy this problem, we proposed the use of statistical parameters to quantify uncertainties directly on both the dose-volume histogram and the dose distribution. The robust plan analysis tool was successfully implemented to compute both the expectation value and the standard deviation of the dosimetric parameters of a treatment plan under the uncertainties. For 15 lung cancer patients, the proposed method was used to quantify the dosimetric difference between the nominal situation and its expected value under the uncertainties.
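An illustrative sketch of the robust plan evaluation idea described above: sample setup and range uncertainty scenarios, recompute a dosimetric parameter for each, and report its expectation value and standard deviation. The dose surrogate and uncertainty magnitudes below are toy assumptions, not the dissertation's dose engine or clinical values:

```python
import numpy as np

def ctv_d95_for_scenario(setup_shift_mm, range_error_pct):
    """Toy surrogate for a recomputed CTV D95 (Gy); a real tool would re-run
    the dose calculation for the shifted and range-scaled scenario."""
    nominal_d95 = 60.0
    return nominal_d95 - 0.2 * np.linalg.norm(setup_shift_mm) - 0.4 * abs(range_error_pct)

rng = np.random.default_rng(7)
samples = []
for _ in range(200):
    shift = rng.normal(0.0, 3.0, size=3)   # assumed 3 mm setup uncertainty (1 SD) per axis
    range_err = rng.normal(0.0, 3.5)       # assumed 3.5 % range uncertainty (1 SD)
    samples.append(ctv_d95_for_scenario(shift, range_err))

d95 = np.asarray(samples)
print(f"CTV D95 under uncertainty: {d95.mean():.1f} Gy +/- {d95.std(ddof=1):.1f} Gy")
```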
Abstract:
Maximizing data quality may be especially difficult in trauma-related clinical research. Strategies are needed to improve data quality and assess the impact of data quality on clinical predictive models. This study had two objectives. The first was to compare missing data between two multi-center trauma transfusion studies: a retrospective study (RS) using medical chart data with minimal data quality review, and the PRospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study with standardized quality assurance. The second objective was to assess the impact of missing data on clinical prediction algorithms by evaluating blood transfusion prediction models using PROMMTT data. RS (2005-06) and PROMMTT (2009-10) investigated trauma patients receiving ≥ 1 unit of red blood cells (RBC) at ten Level I trauma centers. Missing data were compared for 33 variables collected in both studies using mixed effects logistic regression (including random intercepts for study site). Massive transfusion (MT) patients received ≥ 10 RBC units within 24 h of admission. Correct classification percentages for three MT prediction models were evaluated using complete case analysis and multiple imputation based on the multivariate normal distribution. A sensitivity analysis for missing data was conducted to estimate the upper and lower bounds of correct classification using assumptions about missing data under best and worst case scenarios. Most variables (17/33 = 52%) had <1% missing data in RS and PROMMTT. Of the remaining variables, 50% demonstrated less missingness in PROMMTT, 25% had less missingness in RS, and 25% were similar between studies. Missing percentages for MT prediction variables in PROMMTT ranged from 2.2% (heart rate) to 45% (respiratory rate). For variables missing >1%, study site was associated with missingness (all p ≤ 0.021). Survival time predicted missingness for 50% of RS and 60% of PROMMTT variables. Complete case proportions for the MT models ranged from 41% to 88%. Complete case analysis and multiple imputation demonstrated similar correct classification results. Sensitivity analysis upper-lower bound ranges for the three MT models were 59-63%, 36-46%, and 46-58%. Prospective collection of ten-fold more variables with data quality assurance reduced overall missing data. Study site and patient survival were associated with missingness, suggesting that data were not missing completely at random and that complete case analysis may lead to biased results. Evaluating clinical prediction model accuracy may be misleading in the presence of missing data, especially with many predictor variables. The proposed sensitivity analysis estimating correct classification under upper (best case scenario) and lower (worst case scenario) bounds may be more informative than multiple imputation, which provided results similar to complete case analysis.
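A small sketch of the upper/lower-bound sensitivity analysis described above, using synthetic predictions: cases whose prediction is unavailable because of missing inputs are counted as correct for the best-case bound and incorrect for the worst-case bound (the arrays, accuracy, and missingness rate are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
y_true = rng.integers(0, 2, size=n)                          # 1 = massive transfusion
y_pred = np.where(rng.random(n) < 0.7, y_true, 1 - y_true).astype(float)
y_pred[rng.random(n) < 0.2] = np.nan                         # ~20% unpredictable (missing inputs)

observed = ~np.isnan(y_pred)
correct_observed = int((y_pred[observed] == y_true[observed]).sum())
n_missing = int((~observed).sum())

best_case = (correct_observed + n_missing) / n               # missing all scored correct
worst_case = correct_observed / n                            # missing all scored incorrect
print(f"correct classification bounds: {worst_case:.1%} - {best_case:.1%}")
```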
Abstract:
The impact of global climate change on coral reefs is expected to be most profound at the sea surface, where fertilization and embryonic development of broadcast-spawning corals takes place. We examined the effect of increased temperature and elevated CO2 levels on the in vitro fertilization success and initial embryonic development of broadcast-spawning corals using a single male:female cross of three different species from mid- and high-latitude locations: Lyudao, Taiwan (22° N) and Kochi, Japan (32° N). Eggs were fertilized under ambient conditions (27 °C and 500 µatm CO2) and under conditions predicted for 2100 (IPCC worst case scenario, 31 °C and 1000 µatm CO2). Fertilization success, abnormal development and early developmental success were determined for each sample. Increased temperature had a more profound influence than elevated CO2. In most cases, near-future warming caused a significant drop in early developmental success as a result of decreased fertilization success and/or increased abnormal development. The embryonic development of the male:female cross of A. hyacinthus from the high-latitude location was more sensitive to the increased temperature (+4 °C) than the male:female cross of A. hyacinthus from the mid-latitude location. The response to the elevated CO2 level was small and highly variable, ranging from positive to negative responses. These results suggest that global warming is a more significant and universal stressor than ocean acidification on the early embryonic development of corals from mid- and high-latitude locations.
Abstract:
Anthropogenically-modulated reductions in pH, termed ocean acidification, could pose a major threat to the physiological performance, stocks, and biodiversity of calcifiers and may devalue their ecosystem services. Recent debate has focussed on the need to develop approaches to arrest the potential negative impacts of ocean acidification on ecosystems dominated by calcareous organisms. In this study, we demonstrate the role of a discrete (i.e. diffusion) boundary layer (DBL), formed at the surface of some calcifying species under slow flows, in buffering them from the corrosive effects of low pH seawater. The coralline macroalga Arthrocardia corymbosa was grown in a multifactorial experiment with two mean pH levels (8.05 'ambient' and 7.65 a worst case 'ocean acidification' scenario projected for 2100), each with two levels of seawater flow (fast and slow, i.e. DBL thin or thick). Coralline algae grown under slow flows with thick DBLs (i.e., unstirred with regular replenishment of seawater to their surface) maintained net growth and calcification at pH 7.65 whereas those in higher flows with thin DBLs had net dissolution. Growth under ambient seawater pH (8.05) was not significantly different in thin and thick DBL treatments. No other measured diagnostic (recruit sizes and numbers, photosynthetic metrics, %C, %N, %MgCO3) responded to the effects of reduced seawater pH. Thus, flow conditions that promote the formation of thick DBLs, may enhance the subsistence of calcifiers by creating localised hydrodynamic conditions where metabolic activity ameliorates the negative impacts of ocean acidification.