985 results for Two-sided Caputo derivative
Abstract:
PURPOSE In patients with hormone-dependent postmenopausal breast cancer, standard adjuvant therapy involves 5 years of the nonsteroidal aromatase inhibitors anastrozole and letrozole. The steroidal inhibitor exemestane is partially non-cross-resistant with nonsteroidal aromatase inhibitors and, as a mild androgen, could prove superior to anastrozole in efficacy and toxicity, specifically with less bone loss. PATIENTS AND METHODS We designed an open-label, randomized, phase III trial of 5 years of exemestane versus anastrozole, with a two-sided test of superiority to detect a 2.4% improvement with exemestane in 5-year event-free survival (EFS). Secondary objectives included assessment of overall survival, distant disease-free survival, incidence of contralateral new primary breast cancer, and safety. RESULTS In the study, 7,576 women (median age, 64.1 years) were enrolled. At a median follow-up of 4.1 years, 4-year EFS was 91.0% for exemestane and 91.2% for anastrozole (stratified hazard ratio, 1.02; 95% CI, 0.87 to 1.18; P = .85). Overall, distant disease-free, and disease-specific survival were also similar. In all, 31.6% of patients discontinued treatment as a result of adverse effects, concomitant disease, or study refusal. Osteoporosis/osteopenia, hypertriglyceridemia, vaginal bleeding, and hypercholesterolemia were less frequent on exemestane, whereas mild liver function abnormalities and rare episodes of atrial fibrillation were less frequent on anastrozole. Vasomotor and musculoskeletal symptoms were similar between arms. CONCLUSION This first comparison of the steroidal and nonsteroidal classes of aromatase inhibitors showed neither to be superior in terms of breast cancer outcomes as 5-year initial adjuvant therapy for postmenopausal breast cancer by two-sided test. Less toxicity on bone is compatible with one hypothesis behind MA.27 but requires confirmation.
Exemestane should be considered another option as up-front adjuvant therapy for postmenopausal hormone receptor-positive breast cancer.
Abstract:
BACKGROUND Surgical site infections are the most common hospital-acquired infections among surgical patients. The administration of surgical antimicrobial prophylaxis reduces the risk of surgical site infections. The optimal timing of this procedure is still a matter of debate. While most studies suggest that it should be given as close to the incision time as possible, others conclude that this may be too late for optimal prevention of surgical site infections. A large observational study suggests that surgical antimicrobial prophylaxis should be administered 74 to 30 minutes before surgery. The aim of this article is to report the design and protocol of a randomized controlled trial investigating the optimal timing of surgical antimicrobial prophylaxis. METHODS/DESIGN In this bi-center randomized controlled trial conducted at two tertiary referral centers in Switzerland, we plan to include 5,000 patients undergoing general, oncologic, vascular and orthopedic trauma procedures. Patients are randomized in a 1:1 ratio into two groups: one receiving surgical antimicrobial prophylaxis in the anesthesia room (75 to 30 minutes before incision) and the other receiving surgical antimicrobial prophylaxis in the operating room (less than 30 minutes before incision). We expect a significantly lower rate of surgical site infections with surgical antimicrobial prophylaxis administered more than 30 minutes before the scheduled incision. The primary outcome is the occurrence of surgical site infections during a 30-day follow-up period (one year with an implant in place). Assuming a 5% surgical site infection risk with administration of surgical antimicrobial prophylaxis in the operating room, the planned sample size has 80% power to detect a relative risk reduction for surgical site infections of 33% when administering surgical antimicrobial prophylaxis in the anesthesia room (with a two-sided type I error of 5%). We expect the study to be completed within three years.
DISCUSSION The results of this randomized controlled trial will have an important impact on current international guidelines for infection control strategies in the hospital. Moreover, the results of this randomized controlled trial are of significant interest for patient safety and healthcare economics. TRIAL REGISTRATION This trial is registered on ClinicalTrials.gov under the identifier NCT01790529.
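The stated design figures (5% baseline risk, 33% relative risk reduction, 2,500 patients per arm, two-sided 5% type I error, 80% power) can be sanity-checked with a standard normal-approximation power calculation for two proportions. This is an illustrative sketch, not the trial's actual sample-size program:

```python
from math import sqrt
from statistics import NormalDist

def power_two_proportions(p1, p2, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided two-proportion z-test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    p_bar = (p1 + p2) / 2
    se0 = sqrt(2 * p_bar * (1 - p_bar) / n_per_arm)            # SE under H0 (pooled)
    se1 = sqrt(p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm)  # SE under H1
    z = (abs(p1 - p2) - z_a * se0) / se1
    return NormalDist().cdf(z)

# 5% baseline SSI risk, 33% relative risk reduction, 2,500 patients per arm
print(round(power_two_proportions(0.05, 0.05 * (1 - 0.33), 2500), 2))
```

The result lands close to the 80% power claimed in the protocol, as expected for this sample size.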
Abstract:
BACKGROUND It is unclear whether radial compared with femoral access improves outcomes in unselected patients with acute coronary syndromes undergoing invasive management. METHODS We did a randomised, multicentre, superiority trial comparing transradial against transfemoral access in patients with acute coronary syndrome with or without ST-segment elevation myocardial infarction who were about to undergo coronary angiography and percutaneous coronary intervention. Patients were randomly allocated (1:1) to radial or femoral access with a web-based system. The randomisation sequence was computer generated, blocked, and stratified by use of ticagrelor or prasugrel, type of acute coronary syndrome (ST-segment elevation myocardial infarction, troponin positive or negative, non-ST-segment elevation acute coronary syndrome), and anticipated use of immediate percutaneous coronary intervention. Outcome assessors were masked to treatment allocation. The 30-day coprimary outcomes were major adverse cardiovascular events, defined as death, myocardial infarction, or stroke, and net adverse clinical events, defined as major adverse cardiovascular events or Bleeding Academic Research Consortium (BARC) major bleeding unrelated to coronary artery bypass graft surgery. The analysis was by intention to treat. The two-sided α was prespecified at 0·025. The trial is registered at ClinicalTrials.gov, number NCT01433627. FINDINGS We randomly assigned 8404 patients with acute coronary syndrome, with or without ST-segment elevation, to radial (4197) or femoral (4207) access for coronary angiography and percutaneous coronary intervention. 369 (8·8%) patients with radial access had major adverse cardiovascular events, compared with 429 (10·3%) patients with femoral access (rate ratio [RR] 0·85, 95% CI 0·74-0·99; p=0·0307), non-significant at α of 0·025. 
410 (9·8%) patients with radial access had net adverse clinical events compared with 486 (11·7%) patients with femoral access (0·83, 95% CI 0·73-0·96; p=0·0092). The difference was driven by BARC major bleeding unrelated to coronary artery bypass graft surgery (1·6% vs 2·3%, RR 0·67, 95% CI 0·49-0·92; p=0·013) and all-cause mortality (1·6% vs 2·2%, RR 0·72, 95% CI 0·53-0·99; p=0·045). INTERPRETATION In patients with acute coronary syndrome undergoing invasive management, radial as compared with femoral access reduces net adverse clinical events, through a reduction in major bleeding and all-cause mortality. FUNDING The Medicines Company and Terumo.
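A reader can approximate confidence intervals of this kind directly from the raw event counts with a standard log-normal interval for the risk ratio. Note that the trial reports stratified rate ratios, so this unadjusted sketch differs slightly from the published estimates:

```python
from math import exp, log, sqrt
from statistics import NormalDist

def risk_ratio_ci(a, n1, c, n2, alpha=0.05):
    """Unadjusted risk ratio (a/n1)/(c/n2) with a two-sided log-normal CI."""
    rr = (a / n1) / (c / n2)
    se = sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)  # SE of log risk ratio
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return rr, exp(log(rr) - z * se), exp(log(rr) + z * se)

# Major adverse cardiovascular events: 369/4197 radial vs 429/4207 femoral
rr, lo, hi = risk_ratio_ci(369, 4197, 429, 4207)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The unadjusted interval is in the same ballpark as the stratified 0.85 (0.74-0.99) reported above.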
Abstract:
Objective: Minimizing resection and preserving leaflet tissue has previously been shown to be beneficial for mitral valve function and leaflet kinematics after repair of acute posterior leaflet prolapse in porcine valves. We examined the effects of different additional methods of mitral valve repair (neochordoplasty, ring annuloplasty, edge-to-edge repair and triangular resection) on hemodynamics at different heart rates in an experimental model. Methods: Severe acute P2 prolapse was created in eight porcine mitral valves by resecting the posterior marginal chordae. Valve hemodynamics were quantified under pulsatile conditions in an in vitro heart simulator before and after surgical manipulation. Mitral regurgitation was corrected using four different methods of repair on the same valve: neochordoplasty with expanded polytetrafluoroethylene sutures alone and together with ring annuloplasty, and edge-to-edge repair and triangular resection, both with non-restrictive annuloplasty. Residual mitral valve leak, trans-valvular pressure gradients, flow and cardiac output were measured at 60 and 80 beats/min. A validated statistical linear mixed model was used to analyze the effect of treatment. The p values were calculated using a two-sided Wald test. Results: Only neochordoplasty with expanded polytetrafluoroethylene sutures but without ring annuloplasty achieved hemodynamics similar to those of the native mitral valve (p range 0.071-0.901). Trans-valvular diastolic pressure gradients were within a physiologic range but significantly higher than those of the native valve following neochordoplasty with ring annuloplasty (p<0.001), triangular resection (p<0.001) and edge-to-edge repair (p<0.001). Neochordoplasty alone was significantly better in terms of hemodynamics than neochordoplasty with a ring annuloplasty (p<0.001). These values were stable regardless of heart rate or ring size.
Conclusions: Neochordoplasty without ring annuloplasty is the only repair technique able to achieve almost native physiological hemodynamics after correction of leaflet prolapse in a porcine experimental model of acute chordal rupture.
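The two-sided Wald test used in this analysis reduces to a z-statistic: the estimate divided by its standard error, with the p-value taken from both tails of the standard normal distribution. A minimal sketch, using hypothetical numbers rather than values from the study:

```python
from statistics import NormalDist

def wald_p_two_sided(estimate, se):
    """Two-sided Wald p-value for H0: parameter == 0, from an estimate and its SE."""
    z = abs(estimate / se)
    return 2 * (1 - NormalDist().cdf(z))

# Hypothetical fixed-effect estimate and standard error from a mixed model
print(round(wald_p_two_sided(1.8, 0.5), 4))
```

Statistical software that prints "p = 0.000" for such tests is simply rounding a value below 0.0005.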
Abstract:
Resistance to current chemo- and radiation therapy is the principal problem in anticancer treatment. Although intensively investigated, the therapeutic outcome is still far from satisfactory. Among the multiple factors which contribute to drug resistance in cancer cells, the involvement of autophagy is becoming more and more evident. Autophagy describes a cellular self-digestion process, in which cytoplasmic elements can be selectively engulfed and finally degraded in autophagolysosomes to supply nutrients and building blocks for the cells. Autophagy controls cellular homeostasis and can be induced in response to stresses, like hypoxia and growth factor withdrawal. Since the essential physiological function of autophagy is to maintain cellular metabolic balance, dysregulated autophagy has been found associated with multiple diseases, including cancer. Interestingly, the role of autophagy in cancer is two-sided; it can be pro- or antitumor. Autophagy can suppress tumor formation, for example, by controlling cell proliferation and the production of reactive oxygen species. On the other hand, autophagy can provide nutrients to the tumor cells to support tumor growth under nutrition-limiting conditions, thereby promoting tumor development. This ambivalent behavior is also evident in anticancer therapy: by inducing autophagic cell death, autophagy has been shown to potentiate the cytotoxicity of chemotherapeutic drugs, but autophagy has also been linked to drug resistance, since inhibiting autophagy has been found to sensitize tumor cells toward anticancer drug-induced cell death. In this chapter, we will focus on the dual role of autophagy in tumorigenesis and chemotherapy, will classify autophagy inducers and inhibitors used in anticancer treatment, and will discuss open questions that have arisen for future drug development.
Abstract:
BACKGROUND Impact of contemporary treatment of pre-invasive breast cancer (ductal carcinoma in situ [DCIS]) on long-term outcomes remains poorly defined. We aimed to evaluate national treatment trends for DCIS and to determine their impact on disease-specific (DSS) and overall survival (OS). METHODS The Surveillance, Epidemiology, and End Results (SEER) registry was queried for patients diagnosed with DCIS from 1991 to 2010. Treatment pattern trends were analyzed using Cochran-Armitage trend test. Survival analyses were performed using inverse probability weights (IPW)-adjusted competing risk analyses for DSS and Cox proportional hazard regression for OS. All tests performed were two-sided. RESULTS One hundred twenty-one thousand and eighty DCIS patients were identified. The greatest proportion of patients was treated with lumpectomy and radiation therapy (43.0%), followed by lumpectomy alone (26.5%) and unilateral (23.8%) or bilateral mastectomy (4.5%) with significant shifts over time. The rate of sentinel lymph node biopsy increased from 9.7% to 67.1% for mastectomy and from 1.4% to 17.8% for lumpectomy. Compared with mastectomy, OS was higher for lumpectomy with radiation (hazard ratio [HR] = 0.79, 95% confidence interval [CI] = 0.76 to 0.83, P < .001) and lower for lumpectomy alone (HR = 1.17, 95% CI = 1.13 to 1.23, P < .001). IPW-adjusted ten-year DSS was highest in lumpectomy with XRT (98.9%), followed by mastectomy (98.5%), and lumpectomy alone (98.4%). CONCLUSIONS We identified substantial shifts in treatment patterns for DCIS from 1991 to 2010. When outcomes between locoregional treatment options were compared, we observed greater differences in OS than DSS, likely reflecting both a prevailing patient selection bias as well as clinically negligible differences in breast cancer outcomes between groups.
Abstract:
Mycobacterium avium complex (MAC) is a ubiquitous organism responsible for most pulmonary and disseminated disease caused by nontuberculous mycobacteria (NTM). Though MAC lung disease without predisposing factors is uncommon, in recent years it has been increasingly described in middle-aged and elderly women. Recognition and correct diagnosis are often delayed due to the indolent nature of the disease. It is unclear whether these women have significant clinical disease or whether their airways are simply colonized by the bacterium. This study describes the clinical presentation, identifies risk factors, and assesses the clinical significance of MAC lung disease in HIV-negative women aged 50 years or older. A hybrid study design utilizing both cross-sectional and case-control methodologies was used. A comparison population was selected from previously identified tuberculosis suspects found throughout Harris County. The study population had at least one acid-fast bacillus culture performed between 1/1/1998 and 12/31/2000 from a pulmonary source. Clinical presentation and symptoms were analyzed using a cross-sectional design. Past medical history and other risk factors were evaluated using a traditional case-control study design. Differences in categorical variables were estimated with the Chi-square or Fisher's exact test as appropriate. Odds ratios and 95% confidence intervals were utilized to evaluate associations. Multivariate logistic regression was used to identify predictive factors for MAC. All statistical tests were two-sided and P-values <0.05 were considered statistically significant. Culture-confirmed MAC pulmonary cases were more likely to be white, have bronchiectasis, scoliosis, evidence of cavitation and pleural changes on chest radiography, and granulomas on histopathologic examination than women whose pulmonary cultures were AFB-negative.
After controlling for selected risk factors, white race continued to be significantly associated with MAC lung disease (OR = 4.6, 95% CI = 2.3, 9.2). In addition, asthma history, smoking history and alcohol use were less likely to be evident among MAC cases in multivariate analysis. Right upper and right middle lobe disease was further noted among clinically significant cases. Based on population data, MAC lung disease appears to represent a significant clinical syndrome in HIV-negative women, thus supporting the theory of Lady Windermere syndrome.
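The adjusted odds ratio above comes from multivariate logistic regression, but the same quantity for a single exposure can be read off a 2x2 table with a Woolf (log) confidence interval. The counts below are hypothetical, chosen only to illustrate the arithmetic, and do not reproduce the study's data:

```python
from math import exp, log, sqrt
from statistics import NormalDist

def odds_ratio_ci(a, b, c, d, alpha=0.05):
    """Odds ratio for a 2x2 table with a Woolf (log) confidence interval.
    a = exposed cases, b = exposed controls, c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Hypothetical table: 40 white cases, 20 white controls, 10 non-white cases, 23 non-white controls
or_, lo, hi = odds_ratio_ci(40, 20, 10, 23)
print(f"OR {or_:.1f} (95% CI {lo:.2f}-{hi:.2f})")
```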
Abstract:
Agents on the same side of a two-sided matching market (such as the marriage or labor market) compete with each other by making self-enhancing investments to improve their worth in the eyes of potential partners. Because these expenditures generally occur prior to matching, this activity has come to be known in recent literature (Peters, 2007) as pre-marital investment. This paper builds on that literature by considering the case of sequential pre-marital investment, analyzing a matching game in which one side of the market invests first, followed by the other. Interpreting the first group of agents as workers and the other group as firms, the paper provides a new perspective on the incentive structure that is inherent in labor markets. It also demonstrates that a positive rate of unemployment can exist even in the absence of matching frictions. Policy implications follow, as the prevailing set of equilibria can be altered by restricting entry into the workforce, providing unemployment insurance, or subsidizing pre-marital investment.
Abstract:
Authors of experimental, empirical, theoretical and computational studies of two-sided matching markets have recognized the importance of correlated preferences. We develop a general method for the study of the effect of correlation of preferences on the outcomes generated by two-sided matching mechanisms. We then illustrate our method by using it to quantify the effect of correlation of preferences on satisfaction with the men-propose Gale-Shapley matching for a simple one-to-one matching problem.
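The men-propose Gale-Shapley (deferred acceptance) mechanism referenced here can be stated in a few lines. This is a textbook implementation for a one-to-one market, not the authors' simulation code:

```python
def gale_shapley(men_prefs, women_prefs):
    """Men-propose deferred acceptance; returns a stable matching {man: woman}.
    Each preference list ranks the other side of the market, best first."""
    rank = {w: {m: i for i, m in enumerate(prefs)} for w, prefs in women_prefs.items()}
    free = list(men_prefs)               # men who have not yet been matched
    next_choice = {m: 0 for m in men_prefs}
    engaged = {}                         # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]  # best woman m has not yet proposed to
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])      # w trades up; her old partner is free again
            engaged[w] = m
        else:
            free.append(m)               # w rejects m; he will propose further down his list
    return {m: w for w, m in engaged.items()}

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(gale_shapley(men, women))  # -> {'m2': 'w1', 'm1': 'w2'}
```

Correlated preferences enter such a simulation through how the `men` and `women` ranking lists are generated; the mechanism itself is unchanged.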
Abstract:
Men's and women's preferences are intercorrelated to the extent that men rank highly those women who rank them highly. Intercorrelation plays an important but overlooked role in determining outcomes of matching mechanisms. We study via simulation the effect of intercorrelated preferences on men's and women's aggregate satisfaction with the outcome of the Gale-Shapley matching mechanism. We conclude with an application of our results to the student admission matching problem.
Abstract:
A Monte Carlo computer simulation technique, in which a continuum system is modeled employing a discrete lattice, has been applied to the problem of recrystallization. Primary recrystallization is modeled under conditions where the degree of stored energy is varied and nucleation occurs homogeneously (without regard for position in the microstructure). The nucleation rate is chosen as site saturated. Temporal evolution of the simulated microstructures is analyzed to provide the time dependence of the recrystallized volume fraction and grain sizes. The recrystallized volume fraction shows sigmoidal variation with time. The data are approximately fit by the Johnson-Mehl-Avrami equation with the expected exponents; however, significant deviations are observed for both small and large recrystallized volume fractions. Under constant-rate nucleation conditions, the propensity for irregular grain shapes is decreased and the density of two-sided grains increases.
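The Johnson-Mehl-Avrami (JMAK) equation referred to above has the standard textbook form (this statement and the exponent values are the usual ones, not quoted from the paper):

```latex
X(t) = 1 - \exp\left(-k\,t^{n}\right)
```

where \(X(t)\) is the recrystallized volume fraction, \(k\) a temperature-dependent rate constant, and \(n\) the Avrami exponent. For site-saturated nucleation with constant interface velocity, \(n\) equals the spatial dimension \(d\) (so \(n = 3\) in 3-D), while constant-rate nucleation gives \(n = d + 1\). Plotting \(\ln(-\ln(1 - X))\) against \(\ln t\) linearizes the relation, which is how the fitted exponents and the reported deviations at small and large \(X\) are typically assessed.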
Abstract:
An electrically floating metallic bare tether in a low Earth orbit would be highly negative with respect to the ambient plasma over most of its length, and would be bombarded by ambient ions. This would liberate secondary electrons which, after acceleration through the same voltage, would form a magnetically guided two-sided planar e-beam, and result in auroral effects (ionization and light emission) upon impact on the atmospheric E layer, at about 120-140 km altitude. This paper examines in a preliminary way the feasibility of using this effect as an upper atmospheric probe. The ionization rate can reach up to 10^3 cm^-3 s^-1 if a tape, instead of a wire, is used as tether. Contrary to standard e-beams, the beam from the tether is free of spacecraft charging and plasma interaction problems, and its energy flux varies across the cross section, which is quite large; this would make possible continuous observation from the satellite, with high resolution both spectral and vertical, of the induced optical emissions. Ground observation might be possible at latitudes around 40°, for night, magnetically quiet conditions.
Abstract:
An electrically floating metallic bare tether in a low Earth orbit would be highly negative with respect to the ambient plasma over most of its length, and would be bombarded by ambient ions. This would liberate secondary electrons, which, after acceleration through the same voltage, would form a magnetically guided two-sided planar e-beam. Upon impact on the atmospheric E layer, at about 120-140 km altitude, auroral effects (ionization and light emission) can be expected. This paper examines in a preliminary way the feasibility of using this effect as an upper atmospheric probe. It is concluded that significant perturbations can be produced along the illuminated planar sheet of the atmosphere, with ionization rates of several thousand cm^-3 s^-1. Observation of the induced optical emission is made difficult by the narrowness and high moving speed of the illuminated zone, but it is shown that vertical resolution of single spectral lines is possible, as is wider spectral coverage with no vertical resolution.
Abstract:
Ubiquitous computing is extending its application from specific environments to everyday use; the Internet of Things (IoT) is the most striking example of its application and of its intrinsic complexity compared with classical application development. The main characteristic that distinguishes ubiquitous computing from other kinds lies in how contextual information is used. Classical applications either do not use contextual information at all or use only a small part of it, integrating it in an ad hoc way through an application-specific implementation. The motivation for this particular treatment lies in the difficulty of sharing context with other applications. In fact, what counts as contextual information depends on the type of application: for example, for an image editor the image is the information and its metadata, such as the time of the shot or the camera settings, are the context, whereas for the file system the image together with the camera settings is the information, and the context is represented by the metadata external to the file, such as the modification date or the last-access date. This means that contextual information is hard to share, and the presence of a communication middleware that supports context explicitly simplifies the development of applications for ubiquitous computing. At the same time, the use of context must not be mandatory, because otherwise compatibility with applications that do not use it would be lost, turning such middleware into a mere context middleware. SilboPS, our implementation of a content-based publish/subscribe system inspired by SIENA [11, 9], solves this problem by extending the paradigm with two elements: the context and the context function.
The context represents the contextual information proper of the message to be sent, or that required by the subscriber in order to receive notifications, while the context function is evaluated using the publisher's and the subscriber's contexts. This decouples the context-management logic from that of the context function, thereby increasing the flexibility of communication between applications. In fact, by using an empty context by default, classical and context-aware applications can use the same SilboPS, thus resolving the incompatibility between the two categories. In any case, a possible semantic mismatch may still exist, since it depends on the interpretation that each application makes of the data and cannot be resolved by an agnostic third party. The IoT environment brings not only context challenges but also scalability challenges. The number of sensors, the volume of data they produce and the number of applications that might be interested in manipulating those data are continuously growing. Today's answer to this need is cloud computing, but it requires applications to be able not only to scale, but to do so elastically [22]. Unfortunately, there is no distributed-system slicing primitive that supports partitioning of internal state [33] together with hot swapping, and current cloud systems such as OpenStack or OpenNebula do not directly offer elastic monitoring. This implies a two-sided problem: how an application can scale elastically, and how to monitor that application to know when to scale it horizontally.
E-SilboPS is the elastic version of SilboPS and fits perfectly as a solution to the monitoring problem, thanks to the content-based publish/subscribe paradigm; unlike other solutions [5], it scales efficiently to meet the workload without overprovisioning or underprovisioning resources. It is also based on a newly designed algorithm that shows how to add elasticity to an application under different constraints on state: stateless, isolated state with external coordination, and shared state with general coordination. Its evaluation shows that remarkable speedups can be achieved, with the network layer being the main limiting factor: indeed, the calculated efficiency (see Figure 5.8) shows how each configuration behaves compared with the adjacent ones. This reveals the current trend of the whole system, indicating whether the next configuration will offset its cost with the gain it brings in notification throughput. Particular attention must be paid to the evaluation of deployments with equal cost, to see which is the best solution for a given workload. As a final analysis, the overhead introduced by the different configurations has been estimated in order to identify the main factor limiting throughput. This helps to determine the sequential part and the base overhead [26] of an optimal deployment compared with a suboptimal one. Indeed, depending on the type of workload, the estimate can be as low as 10% for a local optimum or as high as 60%: the latter occurs when a configuration oversized for the workload is deployed.
This Karp-Flatt metric estimate is important for the management system because it indicates in which direction (scale out or scale in) the deployment has to be changed to improve its performance, instead of simply using a scale-out policy. ABSTRACT The application of pervasive computing is extending from field-specific to everyday use. The Internet of Things (IoT) is the most striking example of its application and of its intrinsic complexity compared with classical application development. The main characteristic that differentiates pervasive from other forms of computing lies in the use of contextual information. Some classical applications do not use any contextual information whatsoever. Others, on the other hand, use only part of the contextual information, which is integrated in an ad hoc fashion using an application-specific implementation. This information is handled in a one-off manner because of the difficulty of sharing context across applications. As a matter of fact, the application type determines what the contextual information is. For instance, for an imaging editor, the image is the information and its meta-data, like the time of the shot or camera settings, are the context, whereas, for a file-system application, the image, including its camera settings, is the information and the meta-data external to the file, like the modification date or the last-accessed timestamps, constitute the context. This means that contextual information is hard to share. A communication middleware that supports context decidedly eases application development in pervasive computing. However, the use of context should not be mandatory; otherwise, the communication middleware would be reduced to a context middleware and no longer be compatible with non-context-aware applications.
SilboPS, our implementation of content-based publish/subscribe inspired by SIENA [11, 9], solves this problem by adding two new elements to the paradigm: the context and the context function. Context represents the actual contextual information specific to the message to be sent or that needs to be notified to the subscriber, whereas the context function is evaluated using the publisher’s context and the subscriber’s context to decide whether the current message and context are useful for the subscriber. In this manner, context-management logic is decoupled from context-function logic, increasing the flexibility of communication and usage across different applications. Since the default context is empty, context-aware and classical applications can use the same SilboPS, resolving the syntactic mismatch that there is between the two categories. In any case, the possible semantic mismatch is still present because it depends on how each application interprets the data, and it cannot be resolved by an agnostic third party. The IoT environment introduces not only context but scaling challenges too. The number of sensors, the volume of the data that they produce and the number of applications that could be interested in harvesting such data are growing all the time. Today’s response to the above need is cloud computing. However, cloud computing applications need to be able to scale elastically [22]. Unfortunately, there are no distributed-system slicing primitives that support internal state partitioning [33] together with hot swapping, and current cloud systems like OpenStack or OpenNebula do not provide elastic monitoring out of the box. This means there is a two-sided problem: 1) how to scale an application elastically and 2) how to monitor the application and know when it should scale in or out. E-SilboPS is the elastic version of SilboPS.
It is the solution for the monitoring problem thanks to its content-based publish/subscribe nature and, unlike other solutions [5], it scales efficiently so as to meet workload demand without overprovisioning or underprovisioning. Additionally, it is based on a newly designed algorithm that shows how to add elasticity to an application with different state constraints: stateless, isolated stateful with external coordination and shared stateful with general coordination. Its evaluation shows that it is able to achieve remarkable speedups, with the network layer being the main limiting factor: the calculated efficiency (see Figure 5.8) shows how each configuration performs with respect to adjacent configurations. This provides insight into the actual trending of the whole system in order to predict whether the next configuration would offset its cost against the resulting gain in notification throughput. Particular attention has been paid to the evaluation of same-cost deployments in order to find out which one is the best for the given workload demand. Finally, the overhead introduced by the different configurations has been estimated to identify the primary limiting factor for throughput. This helps to determine the intrinsic sequential part and base overhead [26] of an optimal versus a suboptimal deployment. Depending on the type of workload, this can be as low as 10% in a local optimum or as high as 60% when an overprovisioned configuration is deployed for a given workload demand. This Karp-Flatt metric estimation is important for system management because it indicates the direction (scale in or out) in which the deployment has to be changed in order to improve its performance instead of simply using a scale-out policy.
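The context/context-function extension described in the abstract can be illustrated with a toy content-based publish/subscribe broker. All names and signatures here are hypothetical sketches of the idea, not SilboPS's actual API; the key point is that a subscriber with an empty context and a trivially true context function behaves exactly like a classic content-based subscriber:

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

Context = Dict[str, Any]
Message = Dict[str, Any]

@dataclass
class Subscription:
    content_filter: Callable[[Message], bool]          # classic content-based filter
    context: Context = field(default_factory=dict)     # empty context = classic subscriber
    context_fn: Callable[[Context, Context], bool] = lambda pub, sub: True

class Broker:
    def __init__(self) -> None:
        self.subs: List[Subscription] = []

    def subscribe(self, sub: Subscription) -> None:
        self.subs.append(sub)

    def publish(self, message: Message, pub_context: Context = None) -> List[Subscription]:
        pub_context = pub_context or {}
        # Deliver only when both the content filter and the context function
        # (evaluated over publisher and subscriber contexts) accept the message.
        return [s for s in self.subs
                if s.content_filter(message) and s.context_fn(pub_context, s.context)]

broker = Broker()
classic = Subscription(content_filter=lambda m: m.get("topic") == "temp")
contextual = Subscription(content_filter=lambda m: m.get("topic") == "temp",
                          context={"room": "lab"},
                          context_fn=lambda pub, sub: pub.get("room") == sub["room"])
broker.subscribe(classic)
broker.subscribe(contextual)

hits = broker.publish({"topic": "temp", "value": 21.5}, {"room": "lab"})
print(len(hits))  # both subscribers match
```

Publishing the same message with a different publisher context (say, another room) would reach only the classic subscriber, which is the decoupling the abstract describes.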
Abstract:
The parsec scale properties of low power radio galaxies are reviewed here, using the available data on 12 Fanaroff-Riley type I galaxies. The most frequent radio structure is an asymmetric parsec-scale morphology--i.e., core and one-sided jet. It is shared by 9 (possibly 10) of the 12 mapped radio galaxies. One (possibly 2) of the other galaxies has a two-sided jet emission. Two sources are known from published data to show a proper motion; we present here evidence for proper motion in two more galaxies. Therefore, in the present sample we have 4 radio galaxies with a measured proper motion. One of these has a very symmetric structure and therefore should be in the plane of the sky. The results discussed here are in agreement with the predictions of the unified scheme models. Moreover, the present data indicate that the parsec scale structure in low and high power radio galaxies is essentially the same.