958 results for "failure time model"


Relevance: 80.00%

Abstract:

Providing support for multimedia applications on low-power mobile devices remains a significant research challenge, primarily for two reasons:

• Portable mobile devices have modest sizes and weights, and therefore inadequate resources: low CPU processing power, reduced display capabilities, and limited memory and battery lifetimes compared to desktop and laptop systems.

• Multimedia applications, on the other hand, tend to have distinctive QoS and processing requirements which make them extremely resource-demanding.

This innate conflict introduces key research challenges in the design of multimedia applications and in device-level power optimization. Energy efficiency on this kind of platform can be achieved only via a synergistic hardware and software approach. In fact, while Systems-on-Chip are increasingly programmable and thus functionally flexible, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood in both research and industry that system configuration and management cannot be controlled efficiently by relying only on low-level firmware and hardware drivers: at this level there is no information about user application activity, and consequently about the impact of power-management decisions on QoS. Even though operating-system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system.
A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs, and run-time engines) to improve the programmability and performance efficiency of such platforms.

Enhancing Energy Efficiency and Programmability of Modern Multi-Processor Systems-on-Chip (MPSoCs)

Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high-performance, real-time, and, even more importantly, low-power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high-performance embedded applications, and efficient software development for them is a major issue for designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks to the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on processors in a distributed real-time system is NP-hard, so there is a clear need for deployment technology that addresses these multiprocessing issues. The problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements and which also tries to optimize the power consumption of the entire multiprocessor platform. This dissertation is an attempt to develop insight into efficient, flexible, and optimal methods for allocating and scheduling concurrent applications to multiprocessor architectures. It is a well-known problem in the literature: optimization problems of this kind are very complex even in much simplified variants, so most authors propose simplified models and heuristic approaches to solve them in reasonable time.
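The NP-hard allocation and scheduling problem mentioned above is commonly attacked with greedy heuristics. As a minimal illustration (hypothetical task set and durations, not taken from the dissertation), a list scheduler that maps precedence-constrained tasks to the earliest-free processor can be sketched as:

```python
from collections import deque

def list_schedule(durations, deps, n_procs):
    """Greedy list scheduling of precedence-constrained tasks.

    durations: {task: execution time}
    deps:      {task: set of predecessor tasks}
    Returns (makespan, schedule) with schedule[task] = (proc, start, end).
    """
    indeg = {t: len(deps.get(t, ())) for t in durations}
    succ = {t: [] for t in durations}
    for t, preds in deps.items():
        for p in preds:
            succ[p].append(t)
    ready = deque(t for t, d in indeg.items() if d == 0)
    proc_free = [0.0] * n_procs   # time at which each processor becomes idle
    finish, schedule = {}, {}
    while ready:
        t = ready.popleft()
        # earliest start: all predecessors must have finished
        est = max((finish[p] for p in deps.get(t, ())), default=0.0)
        # pick the processor that lets the task start soonest
        proc = min(range(n_procs), key=lambda i: max(proc_free[i], est))
        start = max(proc_free[proc], est)
        end = start + durations[t]
        proc_free[proc], finish[t] = end, end
        schedule[t] = (proc, start, end)
        for s in succ[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return max(finish.values()), schedule
```

For example, two parallel tasks feeding a third on two processors schedule the first pair concurrently and start the successor once both finish; unlike the exact methods developed in this dissertation, such a heuristic carries no guarantee on the distance from the optimal makespan.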
Model simplification is often achieved by abstracting away platform implementation "details". As a result, optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristic or, more generally, incomplete search is that it introduces an optimality gap of unknown size: it provides very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and optimality gaps by formulating accurate models which account for a number of "non-idealities" in real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructures required by developers to deploy applications on the target MPSoC platforms.

Energy-Efficient LCD Backlight Autoregulation on a Real-Life Multimedia Application Processor

Despite the ever increasing advances in Liquid Crystal Display (LCD) technology, LCD power consumption is still one of the major limitations to the battery life of mobile appliances such as smartphones, portable media players, gaming and navigation devices. There is a clear trend towards larger LCDs to exploit the multimedia capabilities of portable devices that can receive and render high-definition video and pictures: multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and pixel-matrix driving circuits and is typically proportional to the panel area. As a result, its contribution to total power is likely to remain considerable in future mobile appliances.
To address this issue, companies are proposing low-power display technologies suitable for mobile applications, supporting low-power states and image control techniques. On the research side, several power-saving schemes and algorithms can be found in the literature. Some exploit software-only techniques that change the image content to reduce the power associated with crystal polarization; others decrease the backlight level while compensating for the resulting luminance reduction with pixel-by-pixel image processing algorithms that limit the perceived quality degradation. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation presents an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement hardware-assisted image compensation, allowing dynamic scaling of the backlight with a negligible impact on QoS. The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modification.

Thesis Overview

The remainder of the thesis is organized as follows. The first part is focused on enhancing the energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs, based on functional simulation and full-system power estimation. Chapter 4 targets the allocation and scheduling of pipelined, stream-oriented applications on top of distributed-memory architectures with messaging support.
We tackle the complexity of the problem by means of decomposition and no-good generation, and demonstrate the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework that solves the allocation, scheduling, and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while Chapter 6 takes into account applications with conditional task graphs. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers implement software efficiently on a real architecture, the Cell Broadband Engine processor. The second part is focused on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable-device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 reviews several energy-efficient software techniques from the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, summarizing the main research contributions discussed throughout this dissertation.
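The backlight autoregulation idea in the second part, dimming the backlight while boosting pixel values so that perceived luminance is preserved, can be caricatured in software. The following sketch uses a toy 8-bit greyscale model (it is illustrative only, not the hardware-assisted implementation described in the thesis):

```python
def dim_with_compensation(pixels, backlight_scale):
    """Dim the backlight by `backlight_scale` and boost pixel values so
    that perceived luminance (pixel value x backlight level) is preserved
    where possible; saturated pixels clip at 255 and lose luminance.

    pixels: iterable of 8-bit grey levels; backlight_scale in (0, 1].
    Returns (boosted_pixels, number_of_clipped_pixels).
    """
    gain = 1.0 / backlight_scale
    boosted = [min(255, round(p * gain)) for p in pixels]
    # pixels driven past full scale cannot be compensated -> QoS impact
    clipped = sum(1 for p in pixels if p * gain > 255)
    return boosted, clipped
```

Dimming to 80% backlight leaves mid-tones intact while bright pixels near full scale are the ones that clip; keeping the clipped count low is exactly the QoS constraint that a backlight autoregulation policy has to respect.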

Relevance: 80.00%

Abstract:

This work presents exact, hybrid algorithms for mixed resource Allocation and Scheduling problems; in general terms, these consist of assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability, but are mainly motivated by applications in the field of Embedded System Design. In particular, high-performance embedded computing has recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance, and low energy consumption, but the programmer must be able to effectively exploit the platform parallelism. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is in part explained by the complexity of integrated allocation and scheduling, which sets tough challenges for "pure" combinatorial methods. Hybrid CP/OR approaches present the opportunity to exploit the mutual advantages of different methods while compensating for their weaknesses. In this work, we first consider an Allocation and Scheduling problem over the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation, and heuristic-guided search.
Next, we face the Allocation and Scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to effectively deal with the introduced stochastic elements. Finally, we address Allocation and Scheduling with uncertain, bounded execution times via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account and provide an efficient conflict detection method. The proposed approaches achieve good results on practical-size problems, demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
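To make the exact-versus-heuristic contrast concrete, an exhaustive search over task-to-processor assignments guarantees optimality but grows exponentially with the number of tasks. This toy sketch handles allocation only (independent tasks, no precedence or communication, far simpler than the hybrid CP/OR methods above), which is precisely why complete methods need decomposition and cut generation to scale:

```python
from itertools import product

def optimal_allocation(durations, n_procs):
    """Exact (exhaustive) mapping of independent tasks to processors,
    minimizing the makespan. Complexity is n_procs ** n_tasks, so this
    only works for tiny instances; it illustrates what 'solving to
    optimality' means for the allocation step."""
    tasks = list(durations)
    best_makespan, best_map = float('inf'), None
    for assignment in product(range(n_procs), repeat=len(tasks)):
        loads = [0.0] * n_procs
        for task, proc in zip(tasks, assignment):
            loads[proc] += durations[task]
        makespan = max(loads)
        if makespan < best_makespan:
            best_makespan, best_map = makespan, dict(zip(tasks, assignment))
    return best_makespan, best_map
```

For four tasks of durations 3, 3, 2, 2 on two processors, the optimum balances the load as 3+2 on each processor, for a makespan of 5.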

Relevance: 80.00%

Abstract:

In the estimation of a survival function, current status data arise when the only information available on individuals is their survival status at a single monitoring time. Here we briefly review extensions of this data structure in two directions: (i) doubly censored current status data, where there is incomplete information on the origin of the failure time random variable, and (ii) current status information on more complicated stochastic processes. Simple examples of these data forms are presented for motivation.
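For plain current status data, the nonparametric maximum likelihood estimator of the failure time distribution is the isotonic regression of the status indicators on the sorted monitoring times, computable with the pool-adjacent-violators algorithm. A minimal sketch:

```python
def npmle_current_status(times, status):
    """NPMLE of F at the sorted monitoring times for current status data.

    times[i]  : monitoring time C_i
    status[i] : 1 if the failure occurred by C_i (i.e. T_i <= C_i), else 0
    Returns (sorted_times, fitted) with fitted nondecreasing in [0, 1].
    """
    pairs = sorted(zip(times, status))
    blocks = []  # each block is [sum_of_indicators, count]
    for _, d in pairs:
        blocks.append([float(d), 1])
        # Pool adjacent blocks while their means violate monotonicity
        # (previous mean > current mean, compared via cross-products).
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            total, n = blocks.pop()
            blocks[-1][0] += total
            blocks[-1][1] += n
    fitted = []
    for total, n in blocks:
        fitted.extend([total / n] * n)
    return [c for c, _ in pairs], fitted
```

For example, indicators 0, 1, 0, 1 observed at times 1, 2, 3, 4 pool the violating middle pair into a flat segment, giving the nondecreasing fit 0, 0.5, 0.5, 1.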

Relevance: 80.00%

Abstract:

In medical follow-up studies, ordered bivariate survival data are frequently encountered when bivariate failure events are used as the outcomes to identify the progression of a disease. In cancer studies, interest may focus on bivariate failure times such as the time from birth to cancer onset and the time from cancer onset to death. This paper considers a sampling scheme where the first failure event (cancer onset) is identified within a calendar time interval, the time of the initiating event (birth) can be retrospectively confirmed, and the occurrence of the second event (death) is observed subject to right censoring. To analyze this type of bivariate failure time data, it is important to recognize the bias arising from the interval sampling. In this paper, nonparametric and semiparametric methods are developed to analyze bivariate survival data with interval sampling under stationary and semi-stationary conditions. Numerical studies demonstrate that the proposed estimation approaches perform well with practical sample sizes in different simulated models. We apply the proposed methods to SEER ovarian cancer registry data to illustrate the methods and theory.
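A small simulation can make the interval-sampling scheme concrete. This sketch uses hypothetical exponential latencies and a uniform calendar time for the initiating event (illustrative assumptions, not the paper's models) and retains only subjects whose first failure event falls in the sampling interval:

```python
import random

random.seed(42)  # reproducible toy data

u0, u1 = 20.0, 40.0  # calendar interval in which the first event is ascertained
population = []
for _ in range(10_000):
    birth = random.uniform(0.0, 60.0)     # calendar time of the initiating event
    t1 = random.expovariate(1.0 / 30.0)   # initiating event -> first failure (onset)
    t2 = random.expovariate(1.0 / 5.0)    # first failure -> second failure (death)
    population.append((birth, t1, t2))

# Interval sampling: a subject enters the sample only if the calendar
# time of the first failure event, birth + t1, falls inside [u0, u1].
sample = [(b, t1, t2) for (b, t1, t2) in population if u0 <= b + t1 <= u1]

mean_t1_all = sum(t1 for _, t1, _ in population) / len(population)
mean_t1_obs = sum(t1 for _, t1, _ in sample) / len(sample)
# The observed first failure times are a biased version of the population
# ones, so naive estimates computed from the sample are distorted.
print(len(sample), round(mean_t1_all, 2), round(mean_t1_obs, 2))
```

Comparing the two means shows why the sampling bias must be modeled explicitly rather than ignored.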

Relevance: 80.00%

Abstract:

BACKGROUND: Anti-TNFα agents are commonly used for ulcerative colitis (UC) therapy in the event of non-response to conventional strategies or as colon-salvaging therapy. The objectives were to assess the appropriateness of biological therapies for UC patients and to study treatment discontinuation over time, according to appropriateness of treatment, as a measure of outcome. METHODS: We selected adult ulcerative colitis patients from the Swiss IBD cohort who had been treated with anti-TNFα agents. The appropriateness of the first-line anti-TNFα treatment was assessed using detailed criteria developed during the European Panel on the Appropriateness of Therapy for UC. Treatment discontinuation as an outcome was assessed for each category of appropriateness. RESULTS: Appropriateness of the first-line biological treatment was determined in 186 UC patients. For 64% of them, this treatment was considered appropriate. During follow-up, 37% of all patients discontinued biological treatment, 17% specifically because of failure. Time-to-failure of treatment differed significantly between patients on an appropriate biological treatment and those for whom the treatment was considered inappropriate (p=0.0007); the discontinuation rate after 2 years was 26% versus 54% between these two groups. Patients on inappropriate biological treatment were more likely to have severe disease and concomitant steroids and/or immunomodulators. They were also consistently more likely to suffer a failure of efficacy and to stop therapy during follow-up. CONCLUSION: Appropriate first-line anti-TNFα therapy results in a greater likelihood of continuing with the therapy. In situations where biological treatment is uncertain or inappropriate, physicians should consider other options instead of prescribing anti-TNFα agents.

Relevance: 80.00%

Abstract:

Significant numbers of U.S. commercial bank failures in the late 1980s and early 1990s raise important questions about bank performance. We develop a failure-prediction model for Connecticut banks to examine events in 1991 and 1992. We adopt data envelopment analysis to derive measures of managerial efficiency. Our findings can be briefly stated. Managerial inefficiency does not provide significant information to explain Connecticut bank failures. Portfolio variables do generally contain significant information.
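In the simplest case of data envelopment analysis, a single input and a single output under constant returns to scale, each unit's efficiency score reduces to its output/input ratio relative to the best ratio observed in the sample. A minimal sketch with made-up figures (not the Connecticut data):

```python
def dea_crs_efficiency(inputs, outputs):
    """Single-input, single-output DEA efficiency under constant returns
    to scale: each unit's output/input ratio divided by the best ratio in
    the sample. A score of 1.0 means the unit lies on the frontier."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Illustrative figures: deposits as the input, loans as the output.
scores = dea_crs_efficiency([1.0, 2.0, 2.0], [1.0, 2.0, 1.0])
```

The general multi-input, multi-output case solves a linear program per unit, but the intuition is the same: a bank is inefficient to the degree that some convex combination of its peers produces its outputs with fewer inputs.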

Relevance: 80.00%

Abstract:

The Earth's climate abruptly warmed by 5-8 °C during the Palaeocene-Eocene Thermal Maximum (PETM), about 55.5 million years ago [1,2]. This warming was associated with a massive addition of carbon to the ocean-atmosphere system, but estimates of the Earth system response to this perturbation are complicated by widely varying estimates of the duration of carbon release, which range from less than a year to tens of thousands of years. In addition, the source of the carbon, and whether it was released as a single injection or in several pulses, remains the subject of debate [2-4]. Here we present a new high-resolution carbon isotope record from terrestrial deposits in the Bighorn Basin (Wyoming, USA) spanning the PETM, and interpret the record using a carbon-cycle box model of the ocean-atmosphere-biosphere system. Our record shows that the beginning of the PETM is characterized by not one but two distinct carbon release events, separated by a recovery to background values. To reproduce this pattern, our model requires two discrete pulses of carbon released directly to the atmosphere, at average rates exceeding 0.9 Pg C yr^-1, with the first pulse lasting fewer than 2,000 years.
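A carbon-cycle box model of the kind mentioned above can be caricatured as a single well-mixed atmospheric reservoir with linear uptake by the ocean and biosphere. This sketch (illustrative uptake constant and pulse parameters, not the authors' calibrated model) integrates two pulses qualitatively like those inferred for the PETM onset:

```python
def run_box_model(pulses, k_uptake=1.0 / 50_000.0, dt=10.0, t_end=20_000.0):
    """Forward-Euler integration of a one-box atmospheric carbon anomaly:

        dM/dt = injection(t) - k_uptake * M

    pulses: list of (start_yr, duration_yr, rate_PgC_per_yr) injections.
    Returns (peak_anomaly, final_anomaly) in Pg C.
    """
    m, peak = 0.0, 0.0
    for step in range(int(t_end / dt)):
        t = step * dt
        injection = sum(rate for (start, dur, rate) in pulses
                        if start <= t < start + dur)
        m += (injection - k_uptake * m) * dt
        peak = max(peak, m)
    return peak, m

# Two pulses at > 0.9 Pg C per year, the first shorter than 2,000 years,
# separated by a quiescent recovery interval (all numbers illustrative).
peak, final = run_box_model([(0.0, 1500.0, 0.9), (4000.0, 1500.0, 0.9)])
```

Because uptake is linear, the anomaly peaks near the end of the second pulse and then relaxes toward background, mirroring the two-step excursion and recovery described in the record.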

Relevance: 80.00%

Abstract:

The glacial climate system transitioned rapidly between cold (stadial) and warm (interstadial) conditions in the Northern Hemisphere. This variability, referred to as Dansgaard-Oeschger variability, is widely believed to arise from perturbations of the Atlantic Meridional Overturning Circulation. Evidence for such changes during the longer Heinrich stadials has been identified, but direct evidence for overturning circulation changes during Dansgaard-Oeschger events has proven elusive. Here we reconstruct bottom-water [CO3^2-] variability from B/Ca ratios of benthic foraminifera and from indicators of sedimentary dissolution, and use these reconstructions to infer the flow of northern-sourced deep water to the deep central sub-Antarctic Atlantic Ocean. We find that nearly every Dansgaard-Oeschger interstadial is accompanied by a rapid incursion of North Atlantic Deep Water into the deep South Atlantic. Based on these results and transient climate model simulations, we conclude that North Atlantic stadial-interstadial climate variability was associated with significant Atlantic overturning circulation changes that were rapidly transmitted across the Atlantic. While demonstrating the persistent role of Atlantic overturning circulation changes in past abrupt climate variability, our reconstructions of carbonate chemistry further indicate that the carbon cycle response to abrupt climate change was not a simple function of North Atlantic overturning.
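B/Ca-based reconstructions of deep-water carbonate saturation typically pass the measured ratios through a species-specific linear transfer function. A minimal sketch with placeholder coefficients (these are illustrative values, not a published calibration):

```python
def delta_co3_from_b_ca(b_ca_umol_mol, b0=200.0, sensitivity=1.1):
    """Linear transfer function from benthic foraminiferal B/Ca (umol/mol)
    to Delta[CO3^2-] (umol/kg), i.e. departure from saturation.

    b0 (the B/Ca value at saturation) and sensitivity (umol/mol of B/Ca
    per umol/kg of Delta[CO3^2-]) are placeholders for illustration,
    not a calibrated species relationship.
    """
    return (b_ca_umol_mol - b0) / sensitivity
```

Higher B/Ca maps to better-saturated (higher [CO3^2-]) bottom water, which is the sense in which incursions of carbonate-rich North Atlantic Deep Water register in the records described above.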